Canonical Voices

Posts tagged with 'cloud'

Robbie Williamson

The following is an update on Ubuntu’s response to the latest Internet emergency security issue, POODLE (CVE-2014-3566), in combination with an
SSLv3 downgrade vulnerability.

Vulnerability Summary

“SSL 3.0 is an obsolete and insecure protocol. While for most practical purposes it has been replaced by its successors TLS 1.0, TLS 1.1, and TLS 1.2, many TLS implementations remain backwards-compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. The protocol handshake provides for authenticated version negotiation, so normally the latest protocol version common to the client and the server will be used.” -https://www.openssl.org/~bodo/ssl-poodle.pdf

A vulnerability was discovered that affects the protocol negotiation between browsers and HTTP servers, where a man-in-the-middle (MITM) attacker is able to trigger a protocol downgrade (i.e., force a downgrade to SSLv3; CVE to be assigned).  Additionally, a new attack was discovered against the CBC block cipher used in SSLv3 (POODLE, CVE-2014-3566).  Because of this new weakness in the CBC block cipher and the known weaknesses in the RC4 stream cipher (both used with SSLv3), attackers who successfully downgrade the victim’s connection to SSLv3 can exploit the weaknesses of these ciphers to ascertain the plaintext of portions of the connection through brute-force attacks.  For example, an attacker who is able to manipulate the encrypted connection can steal HTTP cookies.  Note that the protocol downgrade behaviour is implemented in web browsers, not in the SSL libraries; therefore, the downgrade attack is currently known to exist only for HTTP.

OpenSSL will be updated to guard against illegal protocol negotiation downgrades (TLS_FALLBACK_SCSV).  When the server and client are updated to use TLS_FALLBACK_SCSV, the protocol cannot be downgraded to below the highest protocol that is supported between the two (so if the client and the server both support TLS 1.2, SSLv3 cannot be used even if the server offers SSLv3).

The recommended course of action is ultimately for sites to disable SSLv3 on their servers, and for browsers to disable SSLv3 by default, since the SSLv3 protocol is known to be broken.  However, it will take time for sites to disable SSLv3, and some sites will choose not to in order to support legacy browsers (e.g., IE6).  As a result, immediately disabling SSLv3 in Ubuntu in the openssl libraries, in servers or in browsers, will break sites that still rely on SSLv3.
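
For administrators who do decide to disable SSLv3 now, the change is a single configuration directive, and the result can be checked with the openssl command-line tool. The following is a sketch (the hostname is a placeholder; exact directive locations vary by distribution and server version):

  # Apache (e.g. in /etc/apache2/mods-available/ssl.conf):
  #     SSLProtocol all -SSLv3
  # nginx (in the relevant server block):
  #     ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

  # After reloading the server, an SSLv3-only handshake should now fail:
  $ openssl s_client -connect example.com:443 -ssl3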

Ubuntu’s Response:

Unfortunately, this issue cannot be addressed in a single USN because this is a vulnerability in a protocol, and the Internet must respond accordingly (i.e., SSLv3 must be disabled everywhere).  Ubuntu’s response provides a path forward to transition users towards safe defaults:

  • Add TLS_FALLBACK_SCSV to openssl in a USN:  In progress, upstream openssl is bundling this patch with other fixes that we will incorporate
  • Follow Google’s lead regarding chromium and chromium content api (as used in oxide):
    • Add TLS_FALLBACK_SCSV support to chromium and oxide:  Done – Added by Google months ago.
    • Disable fallback to SSLv3 in next major version:  In Progress
    • Disable SSLv3 in future version:  In Progress
  • Follow Mozilla’s lead regarding Mozilla products:
    • Disable SSLv3 by default in Firefox 34:  In Progress – due Nov 25
    • Add TLS_FALLBACK_SCSV support in Firefox 35:  In Progress

Ubuntu currently will not:

  • Disable SSLv3 in the OpenSSL libraries at this time, so as not to break compatibility where it is needed
  • Disable SSLv3 in Apache, nginx, etc, so as not to break compatibility where it is needed
  • Preempt Google’s and Mozilla’s plans.  The timing of their response is critical to giving sites an opportunity to migrate away from SSLv3 to minimize regressions

For more information on Ubuntu security notices that affect the current supported releases of Ubuntu, or to report a security vulnerability in an Ubuntu package, please visit http://www.ubuntu.com/usn/.

 

Read more
pitti

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power! This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.
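
For illustration, the usual $OS_* variables look something like this (all values here are site-specific placeholders):

  $ cat ~/novarc
  export OS_AUTH_URL=https://keystone.example.com:5000/v2.0
  export OS_TENANT_NAME=myproject
  export OS_USERNAME=myuser
  export OS_PASSWORD=secret
  $ . ~/novarc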

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored all the knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt-specific “daily” image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter it takes is the name of the pristine image to base it on. This was tested on Canonical’s Bootstack cloud, so it might need some adjustments on other clouds.

Thus something like this should be run daily (pick the base images from nova image-list):

  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.
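
To automate the daily rebuild, a small cron-driven sketch can resolve the newest pristine image first. The awk column and the name pattern depend on your cloud's nova image-list output, so treat this as a starting point:

  # pick the newest matching pristine image, then rebuild the adt image from it
  img=$(nova image-list | awk -F'|' '/ubuntu-utopic-.*amd64.*disk1.img/ {gsub(/ /,"",$3); print $3}' | sort | tail -1)
  ./create-nova-adt-image.sh "$img"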

Now I picked 34 packages that have the "most demanding" tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

  $ cat pkglist
  apport
  apt
  aptdaemon
  apache2
  autopilot-gtk
  autopkgtest
  binutils
  chromium-browser
  cups
  dbus
  gem2deb
  glib-networking
  glib2.0
  gvfs
  kcalc
  keystone
  libnih
  libreoffice
  lintian
  lxc
  mysql-5.5
  network-manager
  nut
  ofono-phonesim
  php5
  postgresql-9.4
  python3.4
  sbuild
  shotwell
  systemd-shim
  ubiquity
  ubuntu-drivers-common
  udisks2
  upstart

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an "always need this constant to select a non-default network" option that is specific to Canonical Bootstack.

Finally, let’s run the packages from above using ten VMs in parallel:

  parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing, there are now only two failures left, both due to flaky tests; the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

Read more
Prakash Advani

An independent survey of 200 UK-based CIOs has revealed that they are only using about half of the cloud capacity they’ve bought and paid for, and that 90 percent of them see over-provisioning as a necessary evil.

Cloud provider ElasticHosts, which commissioned the survey, says: “Essentially, bad habits like over-provisioning and sacrificing peak performance are being carried from the on-premise world into the cloud, partly because people are willing to accept these limitations.”

Read More: http://www.zdnet.com/cloud-customers-are-still-paying-for-twice-as-much-as-they-need-7000033369/

Read more
Dustin Kirkland



In case you missed the recent Cloud Austin MeetUp, you have another chance to see the Ubuntu Orange Box live and in action here in Austin!

This time, we're at the OpenStack Austin MeetUp, next Wednesday, September 10, 2014, at 6:30pm at Tech Ranch Austin, 9111 Jollyville Rd #100, Austin, TX!

If you join us, you'll witness all of OpenStack Icehouse, deployed in minutes to real hardware. Not an all-in-one DevStack; not a minimum viable set of components.  Real, rich, production-quality OpenStack!  Ceilometer, Ceph, Cinder, Glance, Heat, Horizon, Keystone, MongoDB, MySQL, Nova, NTP, Quantum, and RabbitMQ -- intelligently orchestrated and rapidly scaled across 10 physical servers sitting right up front on the podium.  Of course, we'll go under the hood and look at how all of this comes together on the fabulous Ubuntu Orange Box.

And like any good open source software developer, I generally like to make things myself, and share them with others.  In that spirit, I'll also bring a couple of growlers of my own home brewed beer, Ubrewtu ;-)  Free as in beer, of course!
Cheers,
Dustin

Read more
Ben Howard

For years, the Ubuntu Cloud Images have been built on a timer (i.e., a cronjob or Jenkins job). You can reasonably expect stable and LTS releases to be built twice a week, while our development build is built once a day. Each of these builds is given a serial in the form of YYYYMMDD.

While time-based building has proven to be reliable, different build serials may be functionally identical, just put together at different points in time. Many of the builds that we do for stable and LTS releases are therefore pointless.

When the whole Heartbleed fiasco hit, it put the Cloud Image team into overdrive, since it required manually triggering builds for the LTS releases. When we manually trigger builds, it takes roughly 12-16 hours to build, QA, test and release new Cloud Images. Sure, most of this is automated, but the process had to be started by a human. This got me thinking: there has to be a better way.

What if we build the Cloud Images when the package set changes?

With that, I changed the Ubuntu 14.10 (Utopic Unicorn) build process from time-based to archive-trigger-based. Now, instead of building every day at 00:30 UTC, a build starts when the archive has been updated and the packages in the prior cloud image build are older than the archive versions. In the last three days, there were eight builds for Utopic. For a development version of Ubuntu, this simply means that developers don't have to wait 24 hours for the latest package changes to land in a Cloud Image.
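
Conceptually, the trigger reduces to a version comparison. Here is a sketch in shell, where the two lookup helpers are hypothetical stand-ins for the real archive and build-record queries:

  archive_ver=$(latest_archive_version)      # hypothetical: current archive state
  image_ver=$(version_in_last_image_build)   # hypothetical: recorded at last build
  if dpkg --compare-versions "$image_ver" lt "$archive_ver"; then
      trigger_cloud_image_build              # hypothetical: kick off build + QA
  fi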

Over the next few weeks, I will be moving the 10.04 LTS, 12.04 LTS and 14.04 LTS build processes from time-based to archive-trigger-based. While this might result in less frequent builds, the main advantage is that any given daily build will contain the latest package set. And if you are trying to respond to the latest CVE, or waiting on a bug fix to land, it likely means that you'll have a fresh daily that you can use the following day.

Read more
Dustin Kirkland



I hope you'll join me at Rackspace on Tuesday, August 19, 2014, at the Cloud Austin Meetup, at 6pm, where I'll use our spectacular Orange Box to deploy Hadoop, scale it up, run a terasort, destroy it, deploy OpenStack, launch instances, and destroy it too.  I'll talk about the hardware (the Orange Box, Intel NUCs, Managed VLAN switch), as well as the software (Ubuntu, OpenStack, MAAS, Juju, Hadoop) that makes all of this work in 30 minutes or less!

Be sure to RSVP, as space is limited.

http://www.meetup.com/CloudAustin/events/194009002/

Cheers,
Dustin

Read more
Dustin Kirkland

Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!


Q: Why should I care about randomness? 

A: Because entropy is important!

  • Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
    • SSL keys
    • SSH keys
    • GPG keys
    • /etc/shadow salts
    • TCP sequence numbers
    • UUIDs
    • dm-crypt keys
    • eCryptfs keys
  • Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above

Q: Where does entropy come from?

A: Hardware, typically.

  • Keyboards
  • Mice
  • Interrupt requests
  • HDD seek timing
  • Network activity
  • Microphones
  • Web cams
  • Touch interfaces
  • WiFi/RF
  • TPM chips
  • RdRand
  • Entropy Keys
  • Pricey IBM crypto cards
  • Expensive RSA cards
  • USB lava lamps
  • Geiger Counters
  • Seismographs
  • Light/temperature sensors
  • And so on

Q: But what about virtual machines, in the cloud, where we have (almost) none of those things?

A: Pseudo random number generators are our only viable alternative.

  • In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
    • Basically, endless streams of pseudo random bytes
  • Some utilities and most programming languages implement their own PRNGs
    • But they usually seed from /dev/random or /dev/urandom
  • Sometimes, virtio-rng is available, for hosts to feed guests entropy
    • But not always

Q: Are Linux PRNGs secure enough?

A: Yes, if they are properly seeded.

  • See random(4)
  • When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
  • In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
  • See /etc/init.d/urandom
...
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1

...
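
The complementary boot-time half of that script feeds the saved seed back into the pool; roughly (a sketch, not the verbatim script):

...
# at boot: mix the seed saved at last shutdown back into the pool
cat $SAVEDFILE > /dev/urandom
...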

Q: And what exactly is a random seed?

A: Basically, it’s a small catalyst that primes the PRNG pump.

  • Let’s pretend the digits of Pi are our random number generator
  • The random seed would be a starting point, or “initialization vector”
  • e.g. Pick a number between 1 and 20
    • say, 18
  • Now start reading random numbers

  • Not bad...but if you always pick ‘18’...

XKCD on random numbers

RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

Q: So my OS generates an initial seed at first boot?

A: Yep, but computers are predictable, especially VMs.

  • Computers are inherently deterministic
    • And thus, bad at generating randomness
  • Real hardware can provide quality entropy
  • But virtual machines are basically clones of one another
    • i.e., the cloud
    • No keyboard or mouse
    • IRQ based hardware is emulated
    • Block devices are virtual and cached by hypervisor
    • RTC is shared
    • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool

Dilbert on random numbers


http://j.mp/1dHAK4V


Q: Surely you're just being paranoid about this, right?

A: I’m afraid not...

Analysis of the LRNG (2006)

  • Little prior documentation on Linux’s random number generator
  • Random bits are a limited resource
  • Very little entropy in embedded environments
  • OpenWRT was the case study
  • OS start up consists of a sequence of routine, predictable processes
  • Very little demonstrable entropy shortly after boot
  • http://j.mp/McV2gT

Black Hat (2009)

  • iSec Partners designed a simple algorithm to attack cloud instance SSH keys
  • Picked up by Forbes
  • http://j.mp/1hcJMPu

Factorable.net (2012)

  • Minding Your P’s and Q’s: Detection of Widespread Weak Keys in Network Devices
  • Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
  • Insecure or poorly seeded RNGs in widespread use
    • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
    • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
    • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
  • http://j.mp/1iPATZx

Dual_EC_DRBG Backdoor (2013)

  • Dual Elliptic Curve Deterministic Random Bit Generator
  • Ratified NIST, ANSI, and ISO standard
  • Possible backdoor discovered in 2007
  • Bruce Schneier noted that it was “rather obvious”
  • Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
  • http://j.mp/1bJEjrB

Q: Ruh roh...so what can we do about it?

A: For starters, do a better job seeding our PRNGs.

  • Securely
  • With high quality, unpredictable data
  • More sources are better
  • As early as possible
  • And certainly before generating
  • SSH host keys
  • SSL certificates
  • Or any other critical system DNA
  • /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded

Q: But how do we ensure that in cloud guests?

A: Run Ubuntu!


Sorry, shameless plug...

Q: And what is Ubuntu's solution?

A: Meet pollinate.

  • pollinate is a new security feature that seeds the PRNG (a minimal sketch follows this list).
  • Introduced in Ubuntu 14.04 LTS cloud images
  • Upstart job
  • It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
  • It’s GPLv3 free software
  • Simple shell script wrapper around curl
  • Fetches random seeds
  • From 1 or more entropy servers in a pool
  • Writes them into /dev/urandom
  • https://launchpad.net/pollinate
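
To make the mechanics concrete, here is a stripped-down sketch of the core operation. This is not the actual pollinate script: the real one adds the challenge/response, certificate pinning and logging, and it assumes the server answers a plain fetch (the challenge is optional). The essence is fetching bytes over HTTPS and writing them into the PRNG, which any user may do given the world-writable permissions on /dev/urandom shown later:

  $ curl -sf https://entropy.ubuntu.com/ > /dev/urandom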

Q: What about the back end?

A: Introducing pollen.

  • pollen is an entropy-as-a-service implementation
  • Works over HTTP and/or HTTPS
  • Supports a challenge/response mechanism
  • Provides 512 bit (64 byte) random seeds
  • It’s AGPL free software
  • Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
  • Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy
  • https://launchpad.net/pollen

Q: Golang, did you say?  That sounds cool!

A: Indeed. Around 50 lines of code, cool!

pollen.go

Q: Is there a public entropy service available?

A: Hello, entropy.ubuntu.com.

  • Highly available pollen cluster
  • TLS/SSL encryption
  • Multiple physical servers
  • Behind a reverse proxy
  • Deployed and scaled with Juju
  • Multiple sources of hardware entropy
  • High network traffic is always stirring the pot
  • AGPL, so source code always available
  • Supported by Canonical
  • Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys

Q: But what if I don't necessarily trust Canonical?

A: Then use a different entropy service :-)

  • Deploy your own pollen
    • bzr branch lp:pollen
    • sudo apt-get install pollen
    • juju deploy pollen
  • Add your preferred server(s) to your $POOL (see the sketch after this list)
    • In /etc/default/pollinate
    • In your cloud-init user data
      • In progress
  • In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge
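
For instance, a hypothetical /etc/default/pollinate that prefers an in-house server and falls back to Canonical’s might read:

  # sketch; the internal URL is a placeholder
  POOL="https://entropy.internal.example.com/ https://entropy.ubuntu.com/"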

Q: So does this increase the overall entropy on a system?

A: No, no, no, no, no!

  • pollinate seeds your PRNG, securely and properly and as early as possible
  • This improves the quality of all random numbers generated thereafter
  • pollen provides random seeds over HTTP and/or HTTPS connections
  • This information can be fed into your PRNG
  • The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail
  • Note that neither pollen nor pollinate directly affects this estimate (example below)!!!
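
You can inspect the kernel’s estimate directly; it fluctuates constantly, and the value below is merely illustrative:

  $ cat /proc/sys/kernel/random/entropy_avail
  3071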

Q: Why the challenge/response in the protocol?

A: Think of it like the Heisenberg Uncertainty Principle.

  • The pollinate challenge (via an HTTP POST submission) affects the pollen server’s PRNG state
  • pollinate can verify the response and ensure that the pollen server at least “did some work”
  • From the perspective of the pollen server administrator, all communications are “stirring the pot”
  • Numerous concurrent connections ensure a computationally complex and impossible to reproduce entropy state

Q: What if pollinate gets crappy or compromised or no random seeds?

A: Functionally, it’s no better or worse than it was without pollinate in the mix.

  • In fact, you can `dd if=/dev/zero of=/dev/random` if you like, without harming your entropy quality
    • All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
    • Of course it doesn’t help, but it doesn’t hurt either
  • Your overall security is back to the same level it was when your cloud or virtual machine booted at an only slightly random initial state
  • Note the permissions on /dev/*random
    • crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
    • crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
  • It's a bummer of course, but there's no new compromise

Q: What about SSL compromises, or CA Man-in-the-Middle attacks?

A: We are mitigating that by bundling the public certificates in the client.


  • The pollinate package ships the public certificate of entropy.ubuntu.com
    • /etc/pollinate/entropy.ubuntu.com.pem
    • And curl uses this certificate exclusively by default
  • If this really is your concern (and perhaps it should be!)
    • Add more URLs to the $POOL variable in /etc/default/pollinate
    • Put one of those behind your firewall
    • You simply need to ensure that at least one of those is outside of the control of your attackers

Q: What information gets logged by the pollen server?

A: The usual web server debug info.

  • The current timestamp
  • The incoming client IP/port
    • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
  • The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override the user-agent if you like in /etc/default/pollinate
  • The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]

Q: Have the code or design been audited?

A: Yes, but more feedback is welcome!

  • All of the source is available
  • Service design and hardware specs are available
  • The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
  • At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
    • All feedback has been incorporated

Q: Where can I find more information?

A: Read Up!


Stay safe out there!
:-Dustin

Read more
David Murphy (schwuk)

Ars Technica has a great write up by Lee Hutchinson on our Orange Box demo and training unit.

You can’t help but have your attention grabbed by it!

As the comments are quick to point out – at the expense of the rest of the piece – the hardware isn’t the compelling story here. You can buy your own, and you can almost certainly hand-build an equivalent-or-better setup for less money [1], but Ars recognises this:

Of course, that’s exactly the point: the Orange Box is that taste of heroin that the dealer gives away for free to get you on board. And man, is it attractive. However, as Canonical told me about a dozen times, the company is not making them to sell—it’s making them to use as revenue driving opportunities and to quickly and effectively demo Canonical’s vision of the cloud.

The Orange Box is about showing off MAAS & Juju, and – usually – OpenStack.

To see what Ars think of those, you should read the article.

I definitely echo Lee’s closing statement:

I wish my closet had an Orange Box in it. That thing is hella cool.


  1. Or make one out of wood like my colleague Gavin did! 

Read more
Prakash Advani

Hewlett-Packard has pledged to invest $1 billion in open cloud products and services over the next two years, along with community-driven, open-source cloud technologies.

“Just as the community spread the adoption of Linux in the enterprise, we believe OpenStack will do the same for the cloud,” said Hewlett-Packard CEO and President Meg Whitman, in a webcast announcing Helion Tuesday.

Read More

Read more
mark

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.

 

Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell if an image is safe to use, what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago, and the resulting code, cloud-init, is one of the more visible pieces Canonical has designed and built, and it is very widely adopted.

Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh images any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.

The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of an image that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of potential is a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.

There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates since the image was created, if you want a secure machine. Eventually, you’ll want to re-snapshot for a more up-to-date image. That requires administration overhead and coordination, and most people don’t do it.

That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks out for some information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla, base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voila! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.
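
For example, the customisation can be passed as user data when you boot an instance, and cloud-init executes it early in the first boot. A sketch (the script contents, image and flavor names are placeholders):

  $ cat customise.sh
  #!/bin/sh
  # hypothetical first-boot customisation, run once by cloud-init
  apt-get update
  apt-get install -y nginx

  $ nova boot --image ubuntu-14.04-server-amd64 --flavor m1.small \
      --user-data customise.sh my-instance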

Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.

Read more
Mark Baker

Ubuntu 14.04 LTS

Today is a big day for Ubuntu and a big day for cloud computing: Ubuntu 14.04 LTS is released. Everyone involved with Ubuntu can’t help but be impressed and stirred by the significance of Ubuntu 14.04 LTS.

We are impressed because Ubuntu is gaining extensive traction beyond tech luminaries such as Netflix and Snapchat and the wider DevOps community; it is being adopted by mainstream enterprises such as BestBuy. Ubuntu is dominant in public cloud, with typically 60% market share of Linux workloads on the major cloud providers such as Amazon, Azure and Joyent. Ubuntu Server is also the fastest growing platform for scale-out web computing, having overtaken CentOS some six months ago. So Ubuntu Server is growing up, and we are proud of what it has become. We are stirred by how the adoption of Ubuntu, coupled with the adoption of cloud and scale-out computing, is set to grow enormously as it fast becomes an ‘enterprise’ technology.

Recently, 70% of CIOs stated that they are going to change their technology and sourcing relationships within the next two or three years. This is in large part due to their planned transition to cloud, be it on-premise using technologies such as Ubuntu OpenStack, in a public cloud or, most commonly, using combinations of both. Since the beginning of Ubuntu Server we have been preparing for this time, the time when a wholesale technology infrastructure change occurs, and Ubuntu 14.04 arrives just as the change starts to accelerate beyond the early adopters and technology companies. Enterprises now moving parts of their infrastructure to cloud can choose the technology best suited for the job: Ubuntu 14.04 LTS.

Ubuntu Server 14.04 LTS at a glance

  • Based on version 3.13 of the Linux kernel

  • Includes the Icehouse release of OpenStack

  • Both Ubuntu Server 14.04 LTS and OpenStack are supported until April 2019

  • Includes MAAS for automated hardware provisioning

  • Includes Juju for fast service deployment of 100+ common scale out applications such as MongoDB, Hadoop, node.js, Cloudfoundry, LAMP stack and Elastic Search

  • Ceph Firefly support

  • Open vSwitch 2.0.x

  • Docker included & Docker’s own repository now populated with official Ubuntu 14.04 images

  • Optimised Ubuntu 14.04 images certified for use on all leading public cloud platforms – Amazon AWS, Microsoft Azure, Joyent Cloud, HP Cloud, Rackspace Cloud, CloudSigma and many others.

  • Runs on key hardware architectures: x86, x64, Avoton, ARM64, POWER Systems

  • 50+ systems certified at launch from leading hardware vendors such as HP, Dell, IBM, Cisco and SeaMicro.

The advent of OpenStack, the switch to scale-out computing and the move towards public cloud providers present a perfect storm out of which Ubuntu is set to emerge as the technology used ubiquitously for the next decade. That is why we are impressed and stirred by Ubuntu 14.04. We hope you are too. Download 14.04 LTS here

Read more
John Zannos

Canonical and Cisco share a common vision around the direction of the cloud and the application-driven datacentre.  We believe both need to quickly respond to an application’s needs and be highly elastic.

Cisco’s announcement of an open approach with OpFlex is a great step towards an application-centric cloud and datacenter. Cisco’s Application Centric Infrastructure policy engine (APIC) makes the policy model, APIs and documentation open to the marketplace. These policies will be freely usable by an emerging ecosystem that is adopting an open policy model. Canonical and Cisco are aligned in efforts to leverage open models to accelerate innovation in the cloud and datacenter.

Cisco’s ACI operational model will drive multi-vendor innovation, bringing greater agility, simplicity and scale.  Opening the ACI policy engine (APIC) to multi-vendor infrastructure is a positive step to open source cloud and datacenter operations.  This aligns with the Canonical open strategy for the cloud and datacenter.  Canonical is a firm believer in a strong and open ecosystem.  We take great pride that you can build an OpenStack cloud on Ubuntu from all the major participants in the OpenStack ecosystem (Cisco, Dell, HP, Mirantis and more).  The latest OpenStack Foundation survey of production OpenStack deployments found 55% of them on Ubuntu – more than twice the number of deployments of the next operating system. We believe a healthy and open ecosystem is the best way to ensure great choice for our collective customers.

Canonical is pleased to be a member of Cisco’s OpFlex ecosystem.  Canonical and Cisco intend to collaborate in the standards process. As the standard is finalised, Cisco and Canonical will integrate their companies’ technologies to improve the customer experience. This includes alignment of Canonical’s Juju and KVM with Cisco’s ACI model.

Cisco and Canonical believe there are opportunities to leverage Ubuntu, Ubuntu OpenStack and Juju, Canonical’s service orchestration, with Cisco’s ACI policy-based model.  We see many companies moving to Ubuntu and Ubuntu OpenStack that use Cisco network and compute technology. The collaboration of Canonical with Cisco towards an application centric cloud and datacenter is an opportunity for our mutual customers.

Read more
Mark Baker

It is pretty well known that most of the OpenStack clouds running in production today are based on Ubuntu. Companies like Comcast, NTT, Deutsche Telekom, Bloomberg and HP all trust Ubuntu Server as the right platform to run OpenStack. A fair proportion of the Ubuntu OpenStack users out there also engage Canonical to provide them with technical support, not only for Ubuntu Server but for OpenStack itself. Canonical provides full enterprise-class support for both Ubuntu and OpenStack and has been supporting some of the largest, most demanding customers and their OpenStack clouds since early 2011. This gives us a unique insight into what it takes to support a production OpenStack environment.

For example, in the period from January 1st 2014 to the end of March, Canonical processed hundreds of OpenStack support tickets, averaging over 100 per month. During that time we closed 92 bugs whilst customers opened 99 new ones. These are bugs found by real customers running real clouds, and we are pleased that they are brought to our attention, especially the hard ones, as it helps make OpenStack better for everyone.

The types of support tickets we see are interesting, as core OpenStack itself only represents about 12% of the support traffic. The majority of problems arise in the interaction between OpenStack, the operating system and other infrastructure components – fibre channel drivers used by nova volume, or QEMU/libvirt issues during upgrades, for example. Fixing these problems requires deep expertise in Ubuntu as well as OpenStack, which is why customers choose Canonical to support them.

In my next post I’ll dig a little deeper into supporting OpenStack and how this contributes to the OpenStack ecosystem.

Read more
Sally Radwan

A few years ago, the cloud team at Canonical decided that the future of cloud computing lies not only in what clouds are built on, but in what runs on them, and how quickly, securely and efficiently those services can be managed. This is when Juju was born: our service orchestration tool built for the cloud and inspired by the way IT architects visualise their infrastructure – boxes representing services, connected by lines representing interfaces or relationships. Juju’s GUI makes it simple to search for a ‘Charm’ and drag and drop it onto a canvas to deploy services instantly.

Today, we are announcing two new features for DevOps teams seeking ever faster and easier ways of deploying scalable infrastructure. The first is Juju Charm bundles, which allow you to deploy an entire cloud environment with one click. The second is Quickstart, which spins up an entire Juju environment and deploys the necessary services to run Juju, all with one command. Juju bundles and Quickstart are powerful tools on their own, but the real value comes when they are used together: Quickstart can be combined with bundles to rapidly launch Juju, start up the environment and deploy an entire application infrastructure, all in one action.
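
To give a sense of the workflow, going from a fresh machine to a deployed bundle looks roughly like this; the exact bundle identifier below is illustrative, so browse jujucharms.com for real ones:

  $ sudo apt-get install juju-quickstart
  $ juju-quickstart bundle:mediawiki/single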

Already there are several bundles available that cover key technology areas: security, big data, SaaS, back office workloads, web servers, content management and the integration of legacy systems. New Charm bundles available today include:

Bundles for complex services:

  • Instant Hadoop: The Hadoop cluster bundle is a 7-node starter cluster designed to deploy Hadoop in a way that’s easily scalable. The deployment has been tested with up to 2,000 nodes on AWS.

  • Instant Mongo: A 13-node (across three shards) starter MongoDB cluster, with the capability to horizontally scale each of the three shards.

  • Instant Wiki: Two MediaWiki deployments; a simple example deployment with just MediaWiki and MySQL, and a load-balanced deployment with HAProxy and memcached, designed to be horizontally scalable.

  • A new bundle from import.io allows their SaaS platform to be instantly integrated inside Juju. Navigate to any website using the import.io browser, template the data and then test your crawl. Finally, use the import.io charm to crawl your data directly into ElasticSearch.
  • Instant Security: Syncope + PostgreSQL, developed by Tirasa, is a bundle providing Apache Syncope with the internal storage up and running on PostgreSQL. Apache Syncope is an open source system for managing digital identities in enterprise environments.

  • Instant Enterprise Solutions: Credativ, experts in Open Source consultancy, are showing with their OpenERP bundle how any enterprise can instantly deploy an enterprise resource planning solution.

  • Instant High Performance Computing: HPCC (High Performance Computing Cluster) is a massive parallel-processing computing platform that solves Big Data problems. The platform is Open Source and can now be instantly deployed via Juju.

Francesco Chicchiriccò, CEO Tirasa / VP Apache Syncope comments; “The immediate availability of an Apache Syncope Juju bundle dramatically shortens the product evaluation process and encourages adoption. With this additional facility to get started with Open Source Identity Management, we hope to increase the deployments of Apache Syncope in any environment.”

 

Bundles for developers:

These bundles provide ‘hello world’ blank applications; they are designed as templates for application developers. Simply put, they provide templates with configuration options for an application:

  • Instant Django: A Django bundle with gunicorn and PostgreSQL modelled after the Django ‘Getting Started’ guide is provided for application developers.

  • Instant Rails: Two Rails bundles: one is a simple Rails/Postgres deployment; the ‘scalable’ bundle adds HAProxy, Memcached, Redis, Nagios (for monitoring) and Logstash/Kibana (for logging), providing an application developer with an entire scalable Rails stack.

  • Instant Wildfly (The Community JBoss): The new Wildfly bundle from Technology Blueprint provides an out-of-the-box Wildfly application server in standalone mode running on OpenJDK 7. Currently MySQL as a datasource is also supported via a MySQL relation.

Technology Blueprint, creators of the Wildfly bundle, also uses Juju to build its own cloud environments. The company’s system administrator, Saurabh Jha comments; “Juju bundles are really beneficial for programmers and system administrators. Juju saves time, efforts as well as cost. We’ve used it to create our environment on the fly. All we need is a quick command and the whole setup gets ready automatically. No more waiting for installing and starting those heavy applications/servers manually; a bundle takes care of that for us. We can code, deploy and host our application and when we don’t need it, we can just destroy the environment. It’s that easy.”

You can browse and discover all new bundles on jujucharms.com.

Our entire ecosystem is hard at work too, charming up their applications and creating bundles around them. Upcoming bundles to look forward to include a GNU Cobol bundle, which will enable instant legacy integration; a telecom bundle to instantly deploy and integrate Project Clearwater – an open source IMS; and many others. No doubt you have ideas for a bundle that gives an instant solution to a common problem. It has never been easier to see your ideas turn into reality.

==

If you would like to create your own charm or bundle, here is how to get started: http://developer.ubuntu.com/cloud/ or see a video about Charm Bundles:  https://www.youtube.com/watch?v=eYpnQI6GZTA.

And if you’ve never used Juju before, here is an excellent series of blog posts that will guide you through spinning up a simple environment on AWS: http://insights.ubuntu.com/resources/article/deploying-web-applications-using-juju-part-33/.

Need help or advice? The Juju community is here to assist: https://juju.ubuntu.com/community.

Finally, for the more technically-minded, here is a slightly more geeky take on things by Canonical’s Rick Harding, including a video walkthrough of Quickstart.

Read more
Prakash Advani

Google is currently in the best position to challenge Amazon because they have the engineering culture and technical abilities to release some really innovative features. IBM has bought into some excellent infrastructure at Softlayer but still has to prove its cloud engineering capabilities.

Amazon has set the standard for how we expect cloud infrastructure to behave, but Google doesn’t conform to these standards in some surprising ways. So, if you’re looking at Google Cloud, here are some things you need to be aware of.

Read More: http://gigaom.com/2014/03/02/5-things-you-probably-dont-know-about-google-cloud/

Read more
Prakash Advani

Demand for people with Linux skills is increasing, a trend that appears to follow a shift in server sales.

Cloud infrastructure, including Amazon Web Services, is largely Linux based, and cloud services’ overall growth is increasing Linux server deployments. As many as 30% of all servers shipped this year will go to cloud services providers, according to research firm IDC.

This shift may be contributing to Linux hiring trends reported by the Linux Foundation and IT careers website Dice, in a report released Wednesday. The report states that 77% of hiring managers have put hiring Linux talent on their list of priorities, up from 70% a year ago.

Read More: http://www.computerworld.in/news/demand-for-linux-skills-rises

Read more
Mark Baker

Two of the most frequently asked questions about Ubuntu and Canonical are:

* So, just how do you make money when Ubuntu is free?

and

* Ubuntu is great for developers, but is it really suitable for ‘enterprise use’?

We’re trying to do things differently, so we’re not surprised by these questions. What many people hear from other successful open source companies seems to narrow thinking about the value chain and open source economics.

So let’s try to explain the answers to these questions: what we are doing and why Ubuntu has a model better suited for business in 2014 than that of legacy Linux. Six years ago we made the decision to base our strategy for Ubuntu Server around cloud and scale-out computing. We worked hard to make Ubuntu a great instance on Amazon EC2, which, at the time, was just getting going. We created technologies such as cloud-init to handle initialisation of cloud images. We streamlined the base Ubuntu OS image to create a fast, lightweight base for users and developers to build upon. And very importantly, we doubled down on our model of releasing to a cadence (every six months) and giving developers access to the latest technologies quickly and easily.

The result? It worked. Ubiquity has spoken and Ubuntu is now the most popular operating system in the cloud – it’s number one on AWS, the leading Linux on Azure, dominates DigitalOcean and is first choice on most other public clouds. Ubuntu is also W3Techs’ web operating system of the year and the Linux platform showing the fastest growth for online infrastructure, whilst most others are in decline. In 2012 and 2013 we saw Ubuntu and Ubuntu OpenStack being chosen by large financial services organisations and global telcos for their infrastructure. Big-name web-scale innovators like Snapchat, Instagram, Uber, Quora, Hailo and HipChat among others have all chosen Ubuntu as their standard infrastructure platform. We see Ubuntu leading the charge as the platform for software-defined networking, scale-out storage, platform as a service and OpenStack infrastructure. In fact, a recent OpenStack Foundation survey revealed that 55% of respondents are running Ubuntu on OpenStack – over double that of its nearest competitor. If you measure success by adoption, then Ubuntu is certainly winning the market for next-generation, scale-out workloads.

However, many measure business success in monetary terms, and as one industry pundit often reminds us, “a large percentage of a market that pays zero dollars is still zero dollars”. So, let’s come back to the first question: how do you make money when your product is freely available? Ubiquity creates many opportunities for revenue. It can be from paid-for, value-added tools that help manage security and compliance for customers that care about those things. It can be from commercial agreements with cloud providers, and it can be via the product being an optimised embedded component of a cloud solution delivered by OEMs. Truth is, Canonical is pursuing all of the models above, and we are doing well out of it.

As for enterprise use: enterprises are now really starting to understand that new, high-tech companies are operating their IT infrastructure in radically different ways. Some high-tech companies are able to scale to 1 billion users 24x7x365 with fewer than 100 staff and frugal IT budgets, and enterprises crave some of that efficiency in their infrastructure. So whilst Ubuntu might not be suitable for use in an enterprise set on legacy Linux thinking, it is very much where forward-thinking enterprises are headed to stay ahead of the game.

So, the basic values of Ubuntu Server – freely available, giving developers access to the latest technology through a regular cadence of releases, and optimised for cloud and scale out – have been in place for years. Both adoption and revenue confirm it is the right long-term strategy. Enterprises are evolving and starting to adopt Ubuntu, and the model of restricting access to bits unless money is paid is now drawing to a close. Others are begrudgingly starting to accept this and trying to evolve their business models to compete with the momentum of Ubuntu.

We welcome it, after all, where is the fun in winning if you have no one to beat?

Read more
Prakash

GoGrid CEO John Keagy says if an organization wants to use a true open source database, like MongoDB, Basho’s Riak, Hadoop or Cassandra, Amazon is not the place to go.

“We want to be an open source alternative,” he says. “If you’re not worried about lock-in then use (AWS). If you’re an enterprise that wants to be able to scale indefinitely and have a flexible architecture then you should identify those needs early and embrace an open source architecture.”

Read More: http://www.computerworld.in/news/gogrid-wants-to-be-your-open-source-alternative-to-amazon’s-cloud-databases

 

Read more
Prakash

PayPal has spoken publicly and regularly about its private OpenStack implementation and recently said that 20 percent of its infrastructure runs on OpenStack.

But it’s only a matter of time before PayPal starts running some of its operations on public clouds, said James Barrese, CTO of PayPal.

“We have a few small apps that aren’t financial related where we’re doing experiments on the public cloud,” he said. “We’re not using it in a way that’s a seamless hybrid because we’re a financial system and have very stringent security requirements.”

Read More: http://www.itworld.com/cloud-computing/400964/private-cloud-poster-child-paypal-experimenting-public-cloud

Read more
Mark Baker

It is with great pride that we saw Ubuntu win W3Techs’ Operating System of the Year award.

w3techs_Jan2014

For those of us who work on Ubuntu, increased adoption is one of the most satisfying results of our work and the best measure of whether we are doing the right thing. What is most significant, as highlighted above, is that this is the third year running that Ubuntu has won the award. The reasoning is fairly simple: the growth of Ubuntu as a platform for online infrastructure has far outstripped that of other operating systems.

w3techs_last3_yrs

In fact, over the last three years only two Linux operating systems showed any growth at all – Debian and Ubuntu, although Gentoo had some traction in 2013.

Ubuntu overtaking CentOS was the most significant change in 2013, and our popularity continues to grow whilst many others decline. Many of the notable web properties of 2013 are confirmed Ubuntu users: Snapchat, Uber, Instagram, Buzzfeed, Hailo, Netflix, etc. Developers at fast-thinking, innovative companies love Ubuntu for its flexibility and the ability to get the latest frameworks up and running quickly and easily on cloud or on bare metal.

As observers of the industry will know, tech used in Silicon Valley startups quickly filters through to more traditional Enterprises. With the launch of Ubuntu 14.04 LTS in April, Ubuntu is set for continued greatness this year as more and more businesses seek the agility and innovation shown by many of the hot tech properties. It will be fun trying to make it happen too.

Read about the w3tech awards at:

http://w3techs.com/blog/entry/web_technologies_of_the_year_2013

Images courtesy of w3techs.com

Read more