We are pleased to announce that Ubuntu 12.04 LTS, 14.04 LTS, and 14.10 are now in beta on Google Compute Engine [1, 2, 3].
These images support both the traditional user-data mechanism and Google Compute Engine startup scripts, and the Google Cloud SDK comes pre-installed. Users coming from other clouds can expect the same great experience, while enjoying the features of Google Compute Engine.
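As a sketch of the startup-script mechanism mentioned above (the instance name, image alias, and script contents are illustrative assumptions; check `gcloud compute instances create --help` for your SDK version):

```shell
# Write a boot-time customisation script; Compute Engine runs it as root.
cat > startup.sh <<'EOF'
#!/bin/sh
# Hypothetical example payload: install and start a web server.
apt-get update && apt-get install -y nginx
EOF

# Launching with the script attached (shown for illustration, not executed here):
#   gcloud compute instances create demo-vm \
#       --image ubuntu-14-04 \
#       --metadata-from-file startup-script=startup.sh
```

The same script could instead be passed as user-data and handled by cloud-init; on these images both paths work.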
From an engineering perspective, a lot of us are excited to see this launch. While we don't expect too many rough edges, it is a beta, so feedback is welcome. Please file bugs or join us in #ubuntu-server on Freenode to report any issues (ping me, utlemming, rcj or Odd_Bloke).
Finally, I wanted to thank those that have helped on this project. Launching a cloud is not an easy engineering task: you have to build infrastructure to support the new cloud, create tooling to build and publish images, write QA stacks, and do packaging work. All of this spans multiple teams and disciplines. The support from Google and from Canonical's Foundations and Kernel teams has been instrumental in this launch, as has that of the engineers on the Certified Public Cloud team.
Today marks 10 years of Ubuntu and the release of the 21st version. That is an incredible milestone and one which is worthy of reflection and celebration. I am fortunate enough to be spending the day at our devices sprint with 200+ of the folks that have helped make this possible. There are of course hundreds of others in Canonical and thousands in the community who have helped as well. The atmosphere here includes a lot of reminiscing about the early days and re-telling of the funny stories, and there is a palpable excitement in the air about the future. That same excitement was present at a Canonical Cloud Summit in Brussels last week.
The team here is closing in on shipping our first phone, marking a new era in Ubuntu’s history. There has been excellent work recently to close bugs and improve quality, and our partner BQ is as pleased with the results as we are. We are on the home stretch to this milestone, and are still on track to have Ubuntu phones in the market this year. Further, there is an impressive array of further announcements and phones lined up for 2015.
But of course that’s not all we do – the Ubuntu team and community continue to put out rock solid, high quality Ubuntu desktop releases like clockwork – the 21st of which will be released today. And with the same precision, our PC OEM team continues to make that great work available on a pre-installed basis on millions of PCs across hundreds of machine configurations. That’s an unparalleled achievement, and we really have changed the landscape of Linux and open source over the last decade. The impact of Ubuntu can be seen in countless ways – from the individuals, schools, and enterprises who now use Ubuntu; to proliferation of Codes of Conduct in open source communities; to the acceptance of faster (and near continuous) release cycles for operating systems; to the unique company/community collaboration that makes Ubuntu possible; to the vast number of developers who have now grown up with Ubuntu and in an open source world; to the many, many, many technical innovations to come out of Ubuntu, from single-CD installation in years past to the more recent work on image-based updates.
Ubuntu Server also sprang from our early desktop roots, and has now grown into the leading solution for scale-out computing. Ubuntu and our suite of cloud products and services are the premier choice for any customer or partner looking to operate at scale, and it is indeed a “scale-out” world. From easy-to-consume Ubuntu images on public clouds; to managed cloud infrastructure via BootStack; to standard on-premise, self-managed clouds via Ubuntu OpenStack; to instant solutions delivered on any substrate via Juju, we are the leaders in a highly competitive, dynamic space. The agility, reliability and superior execution that have brought us to today’s milestone remain a critical competency for our cloud team. And as we release Ubuntu 14.10 today, which includes the latest OpenStack, new versions of our tooling such as MAAS and Juju, and initial versions of scale-out solutions for big data and Cloud Foundry, we build on a ten year history of “firsts”.
All Ubuntu releases seem to have their own personality, and Utopic is a fitting way to commemorate the realisation of a decade of vision, hard work and collaboration. We are poised on the edge of a very different decade in Canonical’s history, one in which we’ll carry forward the applicable successes and patterns, but will also forge a new path in the twin worlds of converged devices and scale-out computing. Thanks to everyone who has contributed to the journey thus far. Now, on to Vivid and the next ten years!
The following is an update on Ubuntu’s response to the latest Internet emergency security issue, POODLE (CVE-2014-3566), in combination with an SSLv3 downgrade vulnerability.
“SSL 3.0 is an obsolete and insecure protocol. While for most practical purposes it has been replaced by its successors TLS 1.0, TLS 1.1, and TLS 1.2, many TLS implementations remain backwards compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. The protocol handshake provides for authenticated version negotiation, so normally the latest protocol version common to the client and the server will be used.” –https://www.openssl.org/~bodo/ssl-poodle.pdf
A vulnerability was discovered that affects the protocol negotiation between browsers and HTTP servers, where a man-in-the-middle (MITM) attacker is able to trigger a protocol downgrade (ie, force a downgrade to SSLv3; CVE to be assigned). Additionally, a new attack was discovered against the CBC block cipher used in SSLv3 (POODLE, CVE-2014-3566). Because of this new weakness in the CBC block cipher and the known weaknesses in the RC4 stream cipher (both used with SSLv3), attackers who successfully downgrade the victim’s connection to SSLv3 can now exploit the weaknesses of these ciphers to ascertain the plaintext of portions of the connection through brute force attacks. For example, an attacker who is able to manipulate the encrypted connection is able to steal HTTP cookies. Note that the protocol downgrade vulnerability exists in web browsers and is not a flaw in the SSL libraries themselves. Therefore, the downgrade attack is currently known to exist only for HTTP.
OpenSSL will be updated to guard against illegal protocol negotiation downgrades (TLS_FALLBACK_SCSV). When the server and client are updated to use TLS_FALLBACK_SCSV, the protocol cannot be downgraded to below the highest protocol that is supported between the two (so if the client and the server both support TLS 1.2, SSLv3 cannot be used even if the server offers SSLv3).
The recommended course of action is ultimately for sites to disable SSLv3 on their servers, and for browsers to disable SSLv3 by default since the SSLv3 protocol is known to be broken. However, it will take time for sites to disable SSLv3, and some sites will choose not to, in order to support legacy browsers (eg, IE6). As a result, immediately disabling SSLv3 in Ubuntu in the openssl libraries, in servers or in browsers, will break sites that still rely on SSLv3.
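For site operators who do decide to turn SSLv3 off now, the change is a one-line directive in the common web servers (a sketch; the file locations shown are the stock Ubuntu ones, and you should verify afterwards that the legacy clients you care about can still connect):

```
# Apache 2 with mod_ssl (e.g. /etc/apache2/mods-available/ssl.conf):
SSLProtocol all -SSLv3

# nginx (http or server block of /etc/nginx/nginx.conf):
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
```

Reload the server after editing for the change to take effect.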
Unfortunately, this issue cannot be addressed in a single USN because this is a vulnerability in a protocol, and the Internet must respond accordingly (ie SSLv3 must be disabled everywhere). Ubuntu’s response provides a path forward to transition users towards safe defaults:
Ubuntu currently will not:
For more information on Ubuntu security notices that affect the current supported releases of Ubuntu, or to report a security vulnerability in an Ubuntu package, please visit http://www.ubuntu.com/usn/.
It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~4,000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static, manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.
This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power!. This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.
I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!
While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and the controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low: you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS) or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel, as all the real testing action happens in the ephemeral testbed VMs.
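The $OS_* variables are the standard OpenStack client credentials; a typical environment file to source on the controller might look like this (all values are placeholders, not real endpoints):

```shell
# Standard OpenStack client environment for python-novaclient.
export OS_AUTH_URL=https://keystone.example.com:5000/v2.0  # placeholder endpoint
export OS_TENANT_NAME=my-tenant
export OS_USERNAME=my-user
export OS_PASSWORD=secret
export OS_REGION_NAME=region-1
```

Your cloud provider's dashboard usually offers a ready-made "openrc" file with the correct values.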
The most important preparation step for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: you need suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to the local VM images that you get with adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.
To address this, I refactored all the knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this; you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run when building custom cloud images: essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, and then turn the result into a new adt-specific “daily” image with nova image-create. I wrote a little script, create-nova-adt-image.sh, to demonstrate and automate this; its only parameter is the name of the pristine image to base the new image on. This was tested on Canonical’s Bootstack cloud, so it might need some adjustments on other clouds.
Thus something like this should be run daily, picking the latest pristine base images:

$ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
$ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img
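One way to schedule that rebuild is a crontab entry on the controller (a sketch; the script path is an assumption, and in practice you would first resolve the newest pristine serial rather than hard-coding one):

```
# m  h  dom mon dow  command
30 4 * * *  $HOME/bin/create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
30 5 * * *  $HOME/bin/create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img
```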
This will generate the adt-specific daily images (e.g. adt-utopic-i386) used in the test runs below.
Now I picked 34 packages that have the “most demanding” tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):
$ cat pkglist apport apt aptdaemon apache2 autopilot-gtk autopkgtest binutils chromium-browser cups dbus gem2deb glib-networking glib2.0 gvfs kcalc keystone libnih libreoffice lintian lxc mysql-5.5 network-manager nut ofono-phonesim php5 postgresql-9.4 python3.4 sbuild shotwell systemd-shim ubiquity ubuntu-drivers-common udisks2 upstart
Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5
Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list, and --net-id is an “always need this constant to select a non-default network” option that is specific to Canonical Bootstack.
Finally, let’s run the packages from above using ten VMs in parallel:

$ parallel -j 10 ./adt-run-nova -- $(< pkglist)
After a few iterations of bug fixing there are now only two failures left, both due to flaky tests; the infrastructure now seems to hold up fairly well.
Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.
Happy testing!
An independent survey of 200 UK-based CIOs has revealed that they are only using about half of the cloud capacity they’ve bought and paid for, and that 90 percent of them see over-provisioning as a necessary evil.
Cloud provider ElasticHosts, which commissioned the survey, says: “Essentially, bad habits like over-provisioning and sacrificing peak performance are being carried from the on-premise world into the cloud, partly because people are willing to accept these limitations.”
If you join us, you'll witness all of OpenStack Icehouse, deployed in minutes to real hardware. Not an all-in-one DevStack; not a minimum viable set of components. Real, rich, production-quality OpenStack! Ceilometer, Ceph, Cinder, Glance, Heat, Horizon, Keystone, MongoDB, MySQL, Nova, NTP, Quantum, and RabbitMQ -- intelligently orchestrated and rapidly scaled across 10 physical servers sitting right up front on the podium. Of course, we'll go under the hood and look at how all of this comes together on the fabulous Ubuntu Orange Box.
For years, the Ubuntu Cloud Images have been built on a timer (i.e., a cron job or Jenkins): stable and LTS releases are built twice a week, while the development release is built once a day. Each of these builds is given a serial in the form of YYYYMMDD.
While time-based building has proven to be reliable, different build serials may be functionally the same, just put together at a different point in time. Many of the builds that we do for stable and LTS releases are pointless.
When the whole Heartbleed fiasco hit, it put the Cloud Image team into overdrive, since it required manually triggering builds of the LTS releases. When we manually trigger builds, it takes roughly 12-16 hours to build, QA, test and release new Cloud Images. Sure, most of this is automated, but the process had to be started by a human. This got me thinking: there has to be a better way.
What if we build the Cloud Images when the package set changes?
With that, I changed the Ubuntu 14.10 (Utopic Unicorn) build process from time-based to archive-trigger-based. Now, instead of building every day at 00:30 UTC, a build starts when the archive has been updated and the packages in the prior cloud image build are older than the archive versions. In the last three days, there were eight builds for Utopic. For a development version of Ubuntu, this simply means that developers don't have to wait 24 hours for the latest package changes to land in a Cloud Image.
Over the next few weeks, I will be moving the 10.04 LTS, 12.04 LTS and 14.04 LTS build processes from time-based to archive-trigger-based. While this might result in less frequent builds, the main advantage is that the daily builds will contain the latest package sets. And if you are trying to respond to the latest CVE, or waiting on a bug fix to land, it likely means that you'll have a fresh daily that you can use the following day.
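The trigger condition itself boils down to a serial comparison; here is a minimal sketch (the serial values are made-up examples, and the real builder inspects actual package versions rather than just dates):

```shell
#!/bin/sh
# Compare the last image build serial with the archive's publication stamp;
# only kick off a build when the archive is newer than the image.
LAST_SERIAL=20141020      # serial of the most recent cloud image build (example)
ARCHIVE_STAMP=20141023    # YYYYMMDD stamp of the latest archive update (example)

if [ "$ARCHIVE_STAMP" -gt "$LAST_SERIAL" ]; then
    echo "archive is newer than the image serial: trigger a build"
else
    echo "image is current: skip this build"
fi
```

With the example values above, the archive stamp is newer, so a build would be triggered.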
Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images. You can view my slides here (PDF), or you can read on below. Enjoy!
# Save the current entropy pool to a seed file, to be fed back into the
# generator on the next boot
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1
RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.
As the comments are quick to point out – at the expense of the rest of the piece – the hardware isn’t the compelling story here. While you can buy your own, you can almost certainly hand-build an equivalent-or-better setup for less money, but Ars recognises this:
Of course, that’s exactly the point: the Orange Box is that taste of heroin that the dealer gives away for free to get you on board. And man, is it attractive. However, as Canonical told me about a dozen times, the company is not making them to sell—it’s making them to use as revenue driving opportunities and to quickly and effectively demo Canonical’s vision of the cloud.
To see what Ars think of those, you should read the article.
I definitely echo Lee’s closing statement:
I wish my closet had an Orange Box in it. That thing is hella cool.
The company has pledged to invest $1 billion in open cloud products and services over the next two years, along with community-driven, open-source cloud technologies.
“Just as the community spread the adoption of Linux in the enterprise, we believe OpenStack will do the same for the cloud,” said Hewlett-Packard CEO and President Meg Whitman, in a webcast announcing Helion Tuesday.
This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.
We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.
Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell if an image is safe to use, what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago and the resulting code, cloud-init, is one of the more visible pieces Canonical designed and built, and very widely adopted.
Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh images any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.
The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of an image that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of potential is a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.
There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates since the image was created, if you want a secure machine. Eventually, you’ll want to re-snapshot for a more up-to-date image. That requires administration overhead and coordination, so most people don’t do it.
That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks out for some information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla, base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voila! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.
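As a concrete sketch of that workflow (the package choice and file path are illustrative, not a prescription), the "script" can be ordinary shell passed as user-data:

```shell
# Write the customisation script that cloud-init will run once, as root,
# on the first boot of a pristine Ubuntu image.
cat > user-data.sh <<'EOF'
#!/bin/sh
apt-get update
apt-get install -y nginx
echo "built from a fresh base image plus this script" > /usr/share/nginx/html/index.html
EOF

# Pass it at launch time; every cloud-init-aware cloud has an equivalent flag:
#   nova boot --user-data user-data.sh --image <fresh-ubuntu-image> my-instance
```

The script, not the snapshot, now defines the machine, so it can live in version control and be reviewed like any other code.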
Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.
Today is a big day for Ubuntu and a big day for cloud computing: Ubuntu 14.04 LTS is released. Everyone involved with Ubuntu can’t help but be impressed and stirred by the significance of Ubuntu 14.04 LTS.
We are impressed because Ubuntu is gaining extensive traction beyond tech luminaries such as Netflix, Snapchat and the wider DevOps community; it is being adopted by mainstream enterprises such as BestBuy. Ubuntu is dominant in the public cloud, with typically 60% market share of Linux workloads on the major cloud providers such as Amazon, Azure and Joyent. Ubuntu Server is also the fastest growing platform for scale-out web computing, having overtaken CentOS some six months ago. So Ubuntu Server is growing up, and we are proud of what it has become. We are stirred up by how the adoption of Ubuntu, coupled with the adoption of cloud and scale-out computing, is set to grow enormously as it fast becomes an ‘enterprise’ technology.
Recently, 70% of CIOs stated that they are going to change their technology and sourcing relationships within the next two or three years. This is in large part due to their planned transition to cloud, be it on-premise using technologies such as Ubuntu OpenStack, in a public cloud or, most commonly, using combinations of both. Since the beginning of Ubuntu Server we have been preparing for this time: the time when a wholesale technology infrastructure change occurs. Ubuntu 14.04 arrives just as that change starts to accelerate beyond the early adopters and technology companies. Enterprises now moving parts of their infrastructure to cloud can choose the technology best suited for the job: Ubuntu 14.04 LTS.
Ubuntu Server 14.04 LTS at a glance
Based on version 3.13 of the Linux kernel
Includes the Icehouse release of OpenStack
Both Ubuntu Server 14.04 LTS and OpenStack are supported until April 2019
Includes MAAS for automated hardware provisioning
Includes Juju for fast service deployment of 100+ common scale out applications such as MongoDB, Hadoop, node.js, Cloudfoundry, LAMP stack and Elastic Search
Ceph Firefly support
Docker included & Docker’s own repository now populated with official Ubuntu 14.04 images
Optimised Ubuntu 14.04 images certified for use on all leading public cloud platforms – Amazon AWS, Microsoft Azure, Joyent Cloud, HP Cloud, Rackspace Cloud, CloudSigma and many others.
Runs on key hardware architectures: x86, x64, Avoton, ARM64, POWER Systems
The advent of OpenStack, the switch to scale-out computing and the move towards public cloud providers present a perfect storm out of which Ubuntu is set to emerge as the technology used ubiquitously for the next decade. That is why we are impressed and stirred by Ubuntu 14.04. We hope you are too. Download 14.04 LTS here
Canonical and Cisco share a common vision around the direction of the cloud and the application-driven datacentre. We believe both need to quickly respond to an application’s needs and be highly elastic.
Cisco’s announcement of an open approach with OpFlex is a great step towards an application-centric cloud and datacentre. Cisco’s Application Centric Infrastructure policy engine (APIC) makes the policy model APIs and documentation open to the marketplace. These policies will be freely usable by an emerging ecosystem that is adopting an open policy model. Canonical and Cisco are aligned in their efforts to leverage open models to accelerate innovation in the cloud and datacentre.
Cisco’s ACI operational model will drive multi-vendor innovation, bringing greater agility, simplicity and scale. Opening the ACI policy engine (APIC) to multi-vendor infrastructure is a positive step towards open source cloud and datacentre operations. This aligns with Canonical’s open strategy for the cloud and datacentre. Canonical is a firm believer in a strong and open ecosystem. We take great pride that you can build an OpenStack cloud on Ubuntu with all the major participants in the OpenStack ecosystem (Cisco, Dell, HP, Mirantis and more). The latest OpenStack Foundation survey of production OpenStack deployments found 55% of them on Ubuntu: over twice as many deployments as the next operating system. We believe a healthy and open ecosystem is the best way to ensure great choice for our collective customers.
Canonical is pleased to be a member of Cisco’s OpFlex ecosystem. Canonical and Cisco intend to collaborate in the standards process. As the standard is finalised, Cisco and Canonical will integrate their technologies to improve the customer experience. This includes alignment of Canonical’s Juju and KVM with Cisco’s ACI model.
Cisco and Canonical believe there are opportunities to leverage Ubuntu, Ubuntu OpenStack and Juju, Canonical’s service orchestration tool, with Cisco’s ACI policy-based model. We see many companies that use Cisco network and compute technology moving to Ubuntu and Ubuntu OpenStack. The collaboration of Canonical with Cisco towards an application-centric cloud and datacentre is an opportunity for our mutual customers.
It is pretty well known that most of the OpenStack clouds running in production today are based on Ubuntu. Companies like Comcast, NTT, Deutsche Telekom, Bloomberg and HP all trust Ubuntu Server as the right platform to run OpenStack. A fair proportion of the Ubuntu OpenStack users out there also engage Canonical to provide them with technical support, not only for Ubuntu Server but OpenStack itself. Canonical provides full Enterprise class support for both Ubuntu and OpenStack and has been supporting some of the largest, most demanding customers and their OpenStack clouds since early 2011. This gives us a unique insight into what it takes to support a production OpenStack environment.
For example, from January 1st 2014 to the end of March, Canonical processed hundreds of OpenStack support tickets, averaging over 100 per month. During that time we closed 92 bugs whilst customers opened 99 new ones. These are bugs found by real customers running real clouds, and we are pleased that they are brought to our attention, especially the hard ones, as it helps make OpenStack better for everyone.
The type of support tickets we see is interesting, as core OpenStack itself only represents about 12% of the support traffic. The majority of problems arise from the interaction between OpenStack, the operating system and other infrastructure components: Fibre Channel drivers used by nova-volume, or QEMU/libvirt issues during upgrades, for example. Fixing these problems requires deep expertise in Ubuntu as well as OpenStack, which is why customers choose Canonical to support them.
In my next post I’ll dig a little deeper into supporting OpenStack and how this contributes to the OpenStack ecosystem.
A few years ago, the cloud team at Canonical decided that the future of cloud computing lies not only in what clouds are built on, but in what runs on them, and how quickly, securely, and efficiently those services can be managed. This is when Juju was born, our service orchestration tool built for the cloud and inspired by the way IT architects visualise their infrastructure: boxes representing services, connected by lines representing interfaces or relationships. Juju’s GUI simplifies searching for a ‘Charm’ and dragging and dropping it onto a canvas to deploy services instantly.
Today, we are announcing two new features for DevOps seeking ever faster and easier ways of deploying scalable infrastructure. The first is Juju Charm bundles, which allow you to deploy an entire cloud environment with one click. The second is Quickstart, which spins up an entire Juju environment and deploys the necessary services to run Juju, all with one command. Juju bundles and Quickstart are powerful tools on their own, but the enormous value comes when they are used together: Quickstart can be combined with bundles to rapidly launch Juju, start up the environment, and deploy an entire application infrastructure, all in one action.
Already there are several bundles available that cover key technology areas: security, big data, SaaS, back office workloads, web servers, content management and the integration of legacy systems. New Charm bundles available today include:
Bundles for complex services:
Instant Hadoop: The Hadoop cluster bundle is a 7-node starter cluster designed to deploy Hadoop in a way that’s easily scalable. The deployment has been tested with up to 2,000 nodes on AWS.
Instant Mongo: A 13-node (across three shards) starter MongoDB cluster, with the capability to horizontally scale all three shards.
Instant Wiki: Two MediaWiki deployments: a simple example deployment with just MediaWiki and MySQL, and a load-balanced deployment with HAProxy and memcached, designed to be horizontally scalable.
Instant Security: Syncope + PostgreSQL, developed by Tirasa, is a bundle providing Apache Syncope with its internal storage up and running on PostgreSQL. Apache Syncope is an open source system for managing digital identities in enterprise environments.
Instant High Performance Computing: HPCC (High Performance Computing Cluster) is a massive parallel-processing computing platform that solves Big Data problems. The platform is Open Source and can now be instantly deployed via Juju.
Francesco Chicchiriccò, CEO Tirasa / VP Apache Syncope, comments: “The immediate availability of an Apache Syncope Juju bundle dramatically shortens the product evaluation process and encourages adoption. With this additional facility to get started with Open Source Identity Management, we hope to increase the deployments of Apache Syncope in any environment.”
Bundles for developers:
These bundles provide ‘hello world’ blank applications; they are designed as templates for application developers. Simply put, they provide templates with configuration options for an application:
Instant Django: A Django bundle with gunicorn and PostgreSQL modelled after the Django ‘Getting Started’ guide is provided for application developers.
Instant Rails: Two Rails bundles: one a simple Rails/Postgres deployment, while the ‘scalable’ bundle adds HAProxy, Memcached, Redis, Nagios (for monitoring) and a Logstash/Kibana stack (for logging), providing an application developer with an entire scalable Rails stack.
Instant Wildfly (the community JBoss): The new Wildfly bundle from Technology Blueprint provides an out-of-the-box Wildfly application server in standalone mode running on OpenJDK 7. MySQL as a datasource is also supported via a MySQL relation.
Technology Blueprint, creators of the Wildfly bundle, also uses Juju to build its own cloud environments. The company’s system administrator, Saurabh Jha, comments: “Juju bundles are really beneficial for programmers and system administrators. Juju saves time, effort and cost. We’ve used it to create our environment on the fly. All we need is a quick command and the whole setup gets ready automatically. No more waiting to install and start those heavy applications and servers manually; a bundle takes care of that for us. We can code, deploy and host our application, and when we don’t need it, we can just destroy the environment. It’s that easy.”
You can browse and discover all new bundles on jujucharms.com.
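Under the hood, a Juju bundle is just a YAML description of services and the relations between them. As a rough illustration only (the charm URLs, series and options below are placeholders, not the exact contents of any bundle mentioned above), a minimal MediaWiki + MySQL bundle might look like this:

```yaml
# Illustrative Juju bundle (v3-style bundles.yaml format).
# Charm URLs, series and options are examples only; see jujucharms.com
# for the actual bundles mentioned in this post.
mediawiki-simple:
  services:
    mediawiki:
      charm: "cs:precise/mediawiki"
      num_units: 1
      options:
        name: "Instant Wiki"
    mysql:
      charm: "cs:precise/mysql"
      num_units: 1
  relations:
    - ["mediawiki:db", "mysql:db"]
```

A bundle like this can be deployed in one step with Juju Quickstart and scaled afterwards with `juju add-unit`; the relation stanza is what wires MediaWiki to its database automatically.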
Our entire ecosystem is hard at work too, charming up applications and creating bundles around them. Upcoming bundles to look forward to include a GNU COBOL bundle, which will enable instant legacy integration, a telecom bundle to instantly deploy and integrate Project Clearwater, an open source IMS, and many others. You may well have ideas for a bundle that gives an instant solution to a common problem; it has never been easier to see your ideas turn into reality.
And if you’ve never used Juju before, here is an excellent series of blog posts that will guide you through spinning up a simple environment on AWS: http://insights.ubuntu.com/resources/article/deploying-web-applications-using-juju-part-33/.
Need help or advice? The Juju community is here to assist https://juju.ubuntu.com/community.
Finally, for the more technically-minded, here is a slightly more geeky take on things by Canonical’s Rick Harding, including a video walkthrough of Quickstart.
Google is currently in the best position to challenge Amazon because they have the engineering culture and technical abilities to release some really innovative features. IBM has bought into some excellent infrastructure at Softlayer but still has to prove its cloud engineering capabilities.
Amazon has set the standard for how we expect cloud infrastructure to behave, but Google doesn’t conform to these standards in some surprising ways. So, if you’re looking at Google Cloud, here are some things you need to be aware of.
Demand for people with Linux skills is increasing, a trend that appears to follow a shift in server sales.
Cloud infrastructure, including Amazon Web Services, is largely Linux based, and the overall growth of cloud services is increasing Linux server deployments. As many as 30% of all servers shipped this year will go to cloud services providers, according to research firm IDC.
This shift may be contributing to the Linux hiring trends described in a report released Wednesday by the Linux Foundation and IT careers website Dice. The report states that 77% of hiring managers have put hiring Linux talent on their list of priorities, up from 70% a year ago.
Two of the most frequently asked questions about Ubuntu and Canonical are:
* So, just how do you make money when Ubuntu is free?
* Ubuntu is great for developers, but is it really suitable for ‘enterprise use’?
We’re trying to do things differently, so we’re not surprised by these questions. The familiar stories from other successful open source companies seem to narrow people’s thinking about the value chain and the economics of open source.
So let’s try to answer these questions: what we are doing, and why Ubuntu has a model better suited to business in 2014 than that of legacy Linux. Six years ago we made the decision to base our strategy for Ubuntu Server around cloud and scale-out computing. We worked hard to make Ubuntu a great instance on Amazon EC2, which at the time was just getting going. We created technologies such as cloud-init to handle initialisation of cloud images. We streamlined the base Ubuntu OS image to create a fast, lightweight base for users and developers to build upon. And, very importantly, we doubled down on our model of releasing to a cadence (every six months), giving developers access to the latest technologies quickly and easily.
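For context, cloud-init consumes a user-data payload at first boot and uses it to configure the fresh instance. A minimal sketch of a `#cloud-config` payload (the package name and key shown are placeholders, not a recommended configuration) looks like this:

```yaml
#cloud-config
# Illustrative cloud-init user-data; values are examples only.
package_update: true       # refresh the package index before installing
packages:
  - nginx                  # example package to install on first boot
ssh_authorized_keys:
  - ssh-rsa AAAA...example user@example.com   # placeholder key
runcmd:
  - service nginx restart  # example first-boot command
```

On EC2 and similar clouds this payload is passed as instance user-data, and cloud-init applies it once, on the instance’s first boot.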
The result? It worked. Ubiquity has spoken, and Ubuntu is now the most popular operating system in the cloud: it is number one on AWS, the leading Linux on Azure, dominant on DigitalOcean and the first choice on most other public clouds. Ubuntu is also W3Techs’ web operating system of the year and the Linux platform showing the fastest growth for online infrastructure, whilst most others are in decline. In 2012 and 2013 we saw Ubuntu and Ubuntu OpenStack being chosen by large financial services organisations and global telcos for their infrastructure. Big-name web-scale innovators such as Snapchat, Instagram, Uber, Quora, Hailo and Hipchat have all chosen Ubuntu as their standard infrastructure platform. We see Ubuntu leading the charge as the platform for software-defined networking, scale-out storage, platform as a service and OpenStack infrastructure. In fact, a recent OpenStack Foundation survey revealed that 55% of respondents are running OpenStack on Ubuntu, more than double the figure for its nearest competitor. If you measure success by adoption, then Ubuntu is certainly winning the market for next-generation, scale-out workloads.
However, many measure business success in monetary terms, and as one industry pundit often reminds us, “a large percentage of a market that pays zero dollars is still zero dollars”. So let’s come back to the first question: how do you make money when your product is freely available? Ubiquity creates many opportunities for revenue. It can come from paid-for, value-added tools that help manage security and compliance for customers who care about those things; from commercial agreements with cloud providers; or from the product being an optimised, embedded component of a cloud solution delivered by OEMs. The truth is, Canonical is pursuing all of these models, and we are doing well out of it.
As for enterprise use: enterprises are now really starting to understand that new, high-tech companies operate their IT infrastructure in radically different ways. Some of these companies scale to 1 billion users, 24x7x365, with fewer than 100 staff and frugal IT budgets, and enterprises crave that kind of efficiency in their own infrastructure. So whilst Ubuntu might not be suitable for an enterprise set on legacy Linux thinking, it is very much where forward-thinking enterprises are headed to stay ahead of the game.
So the basic values of Ubuntu Server (freely available, giving developers access to the latest technology through a regular cadence of releases, and optimised for cloud and scale-out computing) have been in place for years. Both adoption and revenue confirm it is the right long-term strategy. Enterprises are evolving and starting to adopt Ubuntu, and the model of restricting access to bits unless money is paid is drawing to a close. Others are begrudgingly starting to accept this and trying to evolve their business models to compete with Ubuntu’s momentum.
We welcome it; after all, where is the fun in winning if you have no one to beat?