Canonical Voices

Posts tagged with 'cloud'

Dustin Kirkland



I hope you'll join me at Rackspace on Tuesday, August 19, 2014, at the Cloud Austin Meetup, at 6pm, where I'll use our spectacular Orange Box to deploy Hadoop, scale it up, run a terasort, destroy it, deploy OpenStack, launch instances, and destroy it too.  I'll talk about the hardware (the Orange Box, Intel NUCs, Managed VLAN switch), as well as the software (Ubuntu, OpenStack, MAAS, Juju, Hadoop) that makes all of this work in 30 minutes or less!

Be sure to RSVP, as space is limited.

http://www.meetup.com/CloudAustin/events/194009002/

Cheers,
Dustin

Read more
Dustin Kirkland

Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!


Q: Why should I care about randomness? 

A: Because entropy is important!

  • Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
    • SSL keys
    • SSH keys
    • GPG keys
    • /etc/shadow salts
    • TCP sequence numbers
    • UUIDs
    • dm-crypt keys
    • eCryptfs keys
  • Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above

Q: Where does entropy come from?

A: Hardware, typically.

  • Keyboards
  • Mice
  • Interrupt requests
  • HDD seek timing
  • Network activity
  • Microphones
  • Web cams
  • Touch interfaces
  • WiFi/RF
  • TPM chips
  • RdRand
  • Entropy Keys
  • Pricey IBM crypto cards
  • Expensive RSA cards
  • USB lava lamps
  • Geiger Counters
  • Seismographs
  • Light/temperature sensors
  • And so on

Q: But what about virtual machines, in the cloud, where we have (almost) none of those things?

A: Pseudo random number generators are our only viable alternative.

  • In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
    • Basically, endless streams of pseudo random bytes
  • Some utilities and most programming languages implement their own PRNGs
    • But they usually seed from /dev/random or /dev/urandom
  • Sometimes, virtio-rng is available, for hosts to feed guests entropy
    • But not always

Q: Are Linux PRNGs secure enough?

A: Yes, if they are properly seeded.

  • See random(4)
  • When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
  • In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
  • See /etc/init.d/urandom (excerpted and sketched below)
...
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1

...
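For reference, the whole pattern looks roughly like this. This is a simplified sketch of the save/restore logic; the shipped init script derives the pool size from /proc/sys/kernel/random/poolsize and handles more edge cases:

    # Sketch of the seed carry-over in /etc/init.d/urandom (simplified)
    SAVEDFILE=/var/lib/urandom/random-seed
    POOLBYTES=512
    case "$1" in
      start)
        # Mix the seed saved at the last shutdown back into the pool...
        [ -f "$SAVEDFILE" ] && cat "$SAVEDFILE" > /dev/urandom
        # ...then immediately write a fresh seed, in case of an unclean shutdown
        dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1
        ;;
      stop)
        # Save a seed for the next boot
        dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1
        ;;
    esac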

Q: And what exactly is a random seed?

A: Basically, it’s a small catalyst that primes the PRNG pump.

  • Let’s pretend the digits of Pi are our random number generator
  • The random seed would be a starting point, or “initialization vector”
  • e.g. Pick a number between 1 and 20
    • say, 18
  • Now start reading random numbers

  • Not bad...but if you always pick ‘18’...

XKCD on random numbers

RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

Q: So my OS generates an initial seed at first boot?

A: Yep, but computers are predictable, especially VMs.

  • Computers are inherently deterministic
    • And thus, bad at generating randomness
  • Real hardware can provide quality entropy
  • But virtual machines are basically clones of one another
    • ie, The Cloud
    • No keyboard or mouse
    • IRQ based hardware is emulated
    • Block devices are virtual and cached by hypervisor
    • RTC is shared
    • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool

Dilbert on random numbers


http://j.mp/1dHAK4V


Q: Surely you're just being paranoid about this, right?

A: I’m afraid not...

Analysis of the LRNG (2006)

  • Little prior documentation on Linux’s random number generator
  • Random bits are a limited resource
  • Very little entropy in embedded environments
  • OpenWRT was the case study
  • OS start up consists of a sequence of routine, predictable processes
  • Very little demonstrable entropy shortly after boot
  • http://j.mp/McV2gT

Black Hat (2009)

  • iSec Partners designed a simple algorithm to attack cloud instance SSH keys
  • Picked up by Forbes
  • http://j.mp/1hcJMPu

Factorable.net (2012)

  • Minding Your P’s and Q’s: Detection of Widespread Weak Keys in Network Devices
  • Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
  • Insecure or poorly seeded RNGs in widespread use
    • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
    • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
    • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
  • http://j.mp/1iPATZx

Dual_EC_DRBG Backdoor (2013)

  • Dual Elliptic Curve Deterministic Random Bit Generator
  • Ratified NIST, ANSI, and ISO standard
  • Possible backdoor discovered in 2007
  • Bruce Schneier noted that it was “rather obvious”
  • Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
  • http://j.mp/1bJEjrB

Q: Ruh roh...so what can we do about it?

A: For starters, do a better job seeding our PRNGs.

  • Securely
  • With high quality, unpredictable data
  • More sources are better
  • As early as possible
  • And certainly before generating:
    • SSH host keys
    • SSL certificates
    • Or any other critical system DNA
  • /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded

Q: But how do we ensure that in cloud guests?

A: Run Ubuntu!


Sorry, shameless plug...

Q: And what is Ubuntu's solution?

A: Meet pollinate.

  • pollinate is a new security feature that seeds the PRNG
  • Introduced in Ubuntu 14.04 LTS cloud images
  • Upstart job
  • It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
  • It’s GPLv3 free software
  • Simple shell script wrapper around curl (sketched after this list)
  • Fetches random seeds
  • From 1 or more entropy servers in a pool
  • Writes them into /dev/urandom
  • https://launchpad.net/pollinate
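Stripped of the challenge/response, certificate pinning, retries and logging, the core idea fits in a few lines of shell. This is an illustrative sketch, not the shipped script:

    # Illustrative core of pollinate: fetch a seed from each server in the pool
    # and mix it into the kernel's entropy pool
    POOL="https://entropy.ubuntu.com/"
    for SERVER in $POOL; do
        curl --silent --max-time 3 "$SERVER" > /dev/urandom
    done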

Q: What about the back end?

A: Introducing pollen.

  • pollen is an entropy-as-a-service implementation
  • Works over HTTP and/or HTTPS
  • Supports a challenge/response mechanism (illustrated after this list)
  • Provides 512 bit (64 byte) random seeds
  • It’s AGPL free software
  • Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
  • Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy
  • https://launchpad.net/pollen
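From the client side, the exchange is simple enough to poke at with curl and sha512sum. A hedged sketch follows; the form-field name and the exact response layout are assumptions for illustration, not the documented wire format:

    # Hypothetical challenge/response exchange with a pollen server
    CHALLENGE=$(head -c 64 /dev/urandom | sha512sum | cut -d' ' -f1)
    RESPONSE=$(curl --silent --data "challenge=$CHALLENGE" https://entropy.ubuntu.com/)
    # The server is expected to echo back sha512sum(challenge) plus a 64-byte seed;
    # compare the hashes by eye, then mix whatever came back into the pool
    printf '%s' "$CHALLENGE" | sha512sum
    echo "$RESPONSE" | head -n 1
    echo "$RESPONSE" > /dev/urandom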

Q: Golang, did you say?  That sounds cool!

A: Indeed. Around 50 lines of code, cool!

pollen.go

Q: Is there a public entropy service available?

A: Hello, entropy.ubuntu.com.

  • Highly available pollen cluster
  • TLS/SSL encryption
  • Multiple physical servers
  • Behind a reverse proxy
  • Deployed and scaled with Juju
  • Multiple sources of hardware entropy
  • High network traffic is always stirring the pot
  • AGPL, so source code always available
  • Supported by Canonical
  • Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys

Q: But what if I don't necessarily trust Canonical?

A: Then use a different entropy service :-)

  • Deploy your own pollen
    • bzr branch lp:pollen
    • sudo apt-get install pollen
    • juju deploy pollen
  • Add your preferred server(s) to your $POOL (example after this list)
    • In /etc/default/pollinate
    • In your cloud-init user data
      • In progress
  • In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge
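As an example, here is a hedged sketch of the relevant lines in /etc/default/pollinate with an internal server added to the pool (the internal hostname is made up here):

    # /etc/default/pollinate (illustrative)
    # Space-separated list of entropy servers to query at boot
    POOL="https://entropy.ubuntu.com/ https://pollen.internal.example.com/"

    # For a plain URL source without challenge/response support, run:
    #   pollinate --no-challenge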

Q: So does this increase the overall entropy on a system?

A: No, no, no, no, no!

  • pollinate seeds your PRNG, securely and properly and as early as possible
  • This improves the quality of all random numbers generated thereafter
  • pollen provides random seeds over HTTP and/or HTTPS connections
  • This information can be fed into your PRNG
  • The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail (checked in the example below)
  • Note that neither pollen nor pollinate directly affects this estimate!!!
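You can inspect that estimate yourself, and confirm that feeding seeds through /dev/urandom leaves it unchanged:

    # The kernel's conservative estimate of available entropy, in bits
    cat /proc/sys/kernel/random/entropy_avail
    # Writes to /dev/urandom (e.g. by pollinate) are mixed into the pool,
    # but this estimate is not credited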

Q: Why the challenge/response in the protocol?

A: Think of it like the Heisenberg Uncertainty Principle.

  • The pollinate challenge (via an HTTP POST submission) affects the pollen server’s PRNG state machine
  • pollinate can verify the response and ensure that the pollen server at least “did some work”
  • From the perspective of the pollen server administrator, all communications are “stirring the pot”
  • Numerous concurrent connections ensure a computationally complex and impossible-to-reproduce entropy state

Q: What if pollinate gets crappy or compromised or no random seeds?

A: Functionally, it’s no better or worse than it was without pollinate in the mix.

  • In fact, you can `dd if=/dev/zero of=/dev/random` if you like, without harming your entropy quality
    • All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
    • Of course it doesn’t help, but it doesn’t hurt either
  • Your overall security is back to the same level it was when your cloud or virtual machine booted at an only slightly random initial state
  • Note the permissions on /dev/*random
    • crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
    • crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
  • It's a bummer of course, but there's no new compromise

Q: What about SSL compromises, or CA Man-in-the-Middle attacks?

A: We are mitigating that by bundling the public certificates in the client.


  • The pollinate package ships the public certificate of entropy.ubuntu.com
    • /etc/pollinate/entropy.ubuntu.com.pem
    • And curl uses this certificate exclusively by default
  • If this really is your concern (and perhaps it should be!)
    • Add more URLs to the $POOL variable in /etc/default/pollinate
    • Put one of those behind your firewall
    • You simply need to ensure that at least one of those is outside of the control of your attackers

Q: What information gets logged by the pollen server?

A: The usual web server debug info.

  • The current timestamp
  • The incoming client IP/port
    • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
  • The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override this if you like in /etc/default/pollinate
  • The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]

Q: Have the code or design been audited?

A: Yes, but more feedback is welcome!

  • All of the source is available
  • Service design and hardware specs are available
  • The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
  • At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
    • All feedback has been incorporated

Q: Where can I find more information?

A: Read Up!


Stay safe out there!
:-Dustin

Read more
David Murphy (schwuk)

Ars Technica has a great write up by Lee Hutchinson on our Orange Box demo and training unit.

You can’t help but have your attention grabbed by it!

As the comments are quick to point out – at the expense of the rest of the piece – the hardware isn’t the compelling story here. While you can buy your own, you can almost certainly hand-build an equivalent-or-better setup for less money1, and Ars recognises this:

Of course, that’s exactly the point: the Orange Box is that taste of heroin that the dealer gives away for free to get you on board. And man, is it attractive. However, as Canonical told me about a dozen times, the company is not making them to sell—it’s making them to use as revenue driving opportunities and to quickly and effectively demo Canonical’s vision of the cloud.

The Orange Box is about showing off MAAS & Juju, and – usually – OpenStack.

To see what Ars think of those, you should read the article.

I definitely echo Lee’s closing statement:

I wish my closet had an Orange Box in it. That thing is hella cool.


  1. Or make one out of wood like my colleague Gavin did! 

Read more
Prakash Advani

The company has pledged to invest $1 billion in open cloud products and services over the next two years, along with community-driven, open-source cloud technologies.

“Just as the community spread the adoption of Linux in the enterprise, we believe OpenStack will do the same for the cloud,” said Hewlett-Packard CEO and President Meg Whitman, in a webcast announcing Helion Tuesday.

Read More

Read more
mark

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.

 

Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell if an image is safe to use, what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago and the resulting code, cloud-init, is one of the more visible pieces Canonical designed and built, and very widely adopted.

Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh images any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.

The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of an image that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of potential is a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.

There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates since the image was created, if you want a secure machine. Eventually, you’ll want to re-snapshot for a more up-to-date image. That requires administration overhead and coordination, so most people don’t do it.

That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks out for some information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla, base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voila! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.
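As a concrete, hedged example of that workflow: your user data can simply be a shell script, which cloud-init runs on the instance’s first boot against the pristine base image (the package installed here is just an illustration):

    #!/bin/sh
    # Illustrative cloud-init user-data script: customise a fresh Ubuntu image at first boot
    apt-get update
    apt-get install -y nginx
    # ...any further configuration your purpose-built image needs goes here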

Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.

Read more
Mark Baker

Ubuntu 14.04 LTS

Today is a big day for Ubuntu and a big day for cloud computing: Ubuntu 14.04 LTS is released. Everyone involved with Ubuntu can’t help but be impressed and stirred by the significance of Ubuntu 14.04 LTS.

We are impressed because Ubuntu is gaining extensive traction outside of the tech luminaries such as Netflix, Snapchat and the wider DevOps community; it is being adopted by mainstream enterprises such as BestBuy. Ubuntu is dominant in public cloud, with typically 60% market share of Linux workloads on the major cloud providers such as Amazon, Azure and Joyent. Ubuntu Server is also the fastest-growing platform for scale-out web computing, having overtaken CentOS some six months ago. So Ubuntu Server is growing up and we are proud of what it has become. We are stirred up by how the adoption of Ubuntu, coupled with the adoption of cloud and scale-out computing, is set to grow enormously as it fast becomes an ‘enterprise’ technology.

Recently 70% of CIOs stated that they are going to change their technology and sourcing relationships within the next two or three years. This is in large part due to their planned transition to cloud, be it on premise using technologies such as Ubuntu OpenStack, in a public cloud or, most commonly, using combinations of both. Since the beginning of Ubuntu Server we have been preparing for this time, the time when a wholesale technology infrastructure change occurs and Ubuntu 14.04 arrives just as the change is starting to accelerate beyond the early adopters and technology companies. Enterprises now moving parts of their infrastructure to cloud can choose the technology best suited for the job: Ubuntu 14.04 LTS:

Ubuntu Server 14.04 LTS at a glance

  • Based on version 3.13 of the Linux kernel

  • Includes the Icehouse release of OpenStack

  • Both Ubuntu Server 14.04 LTS and OpenStack are supported until April 2019

  • Includes MAAS for automated hardware provisioning

  • Includes Juju for fast service deployment of 100+ common scale out applications such as MongoDB, Hadoop, node.js, Cloudfoundry, LAMP stack and Elastic Search

  • Ceph Firefly support

  • Open vSwitch 2.0.x

  • Docker included & Docker’s own repository now populated with official Ubuntu 14.04 images

  • Optimised Ubuntu 14.04 images certified for use on all leading public cloud platforms – Amazon AWS, Microsoft Azure, Joyent Cloud, HP Cloud, Rackspace Cloud, CloudSigma and many others.

  • Runs on key hardware architectures: x86, x64, Avoton, ARM64, POWER Systems

  • 50+ systems certified at launch from leading hardware vendors such as HP, Dell, IBM, Cisco and SeaMicro.

The advent of OpenStack, the switch to scale-out computing and the move towards public cloud providers present a perfect storm out of which Ubuntu is set to emerge as the technology used ubiquitously for the next decade. That is why we are impressed and stirred by Ubuntu 14.04. We hope you are too. Download 14.04 LTS here

Read more
John Zannos

Canonical and Cisco share a common vision around the direction of the cloud and the application-driven datacentre.  We believe both need to quickly respond to an application’s needs and be highly elastic.

Cisco’s announcement of an open approach with OpFlex is a great step towards an application-centric cloud and datacenter. Cisco’s Application Centric Infrastructure policy engine (APIC) makes the policy model APIs and documentation open to the marketplace. These policies will be freely usable by an emerging ecosystem that is adopting an open policy model. Canonical and Cisco are aligned in efforts to leverage open models to accelerate innovation in the cloud and datacenter.

Cisco’s ACI operational model will drive multi-vendor innovation, bringing greater agility, simplicity and scale.  Opening the ACI policy engine (APIC) to multi-vendor infrastructure is a positive step towards open source cloud and datacenter operations.  This aligns with Canonical’s open strategy for the cloud and datacenter.  Canonical is a firm believer in a strong and open ecosystem.  We take great pride that you can build an OpenStack cloud on Ubuntu from all the major participants in the OpenStack ecosystem (Cisco, Dell, HP, Mirantis and more).  The latest OpenStack Foundation survey of production OpenStack deployments found 55% of them on Ubuntu – more than twice the number of deployments of the next operating system. We believe a healthy and open ecosystem is the best way to ensure great choice for our collective customers.

Canonical is pleased to be a member of Cisco’s OpFlex ecosystem.  Canonical and Cisco intend to collaborate in the standards process. As the standard is finalised, Cisco and Canonical will integrate their technologies to improve the customer experience. This includes alignment of Canonical’s Juju and KVM with Cisco’s ACI model.

Cisco and Canonical believe there are opportunities to leverage Ubuntu, Ubuntu OpenStack and Juju, Canonical’s service orchestration, with Cisco’s ACI policy-based model.  We see many companies moving to Ubuntu and Ubuntu OpenStack that use Cisco network and compute technology. The collaboration of Canonical with Cisco towards an application centric cloud and datacenter is an opportunity for our mutual customers.

Read more
Mark Baker

It is pretty well known that most of the OpenStack clouds running in production today are based on Ubuntu. Companies like Comcast, NTT, Deutsche Telekom, Bloomberg and HP all trust Ubuntu Server as the right platform to run OpenStack. A fair proportion of the Ubuntu OpenStack users out there also engage Canonical to provide them with technical support, not only for Ubuntu Server but OpenStack itself. Canonical provides full Enterprise class support for both Ubuntu and OpenStack and has been supporting some of the largest, most demanding customers and their OpenStack clouds since early 2011. This gives us a unique insight into what it takes to support a production OpenStack environment.

For example, in the period from January 1st 2014 to the end of March, Canonical processed hundreds of OpenStack support tickets, averaging over 100 per month. During that time we closed 92 bugs whilst customers opened 99 new ones. These are bugs found by real customers running real clouds, and we are pleased that they are brought to our attention, especially the hard ones, as it helps make OpenStack better for everyone.

The type of support tickets we see is interesting, as core OpenStack itself only represents about 12% of the support traffic. The majority of problems arise from the interaction between OpenStack, the operating system and other infrastructure components – fibre channel drivers used by nova-volume, or QEMU/libvirt issues during upgrades, for example. Fixing these problems requires deep expertise in Ubuntu as well as OpenStack, which is why customers choose Canonical to support them.

In my next post I’ll dig a little deeper into supporting OpenStack and how this contributes to the OpenStack ecosystem.

Read more
Sally Radwan

A few years ago, the cloud team at Canonical decided that the future of cloud computing lies not only in what clouds are built on, but in what runs on them, and how quickly, securely, and efficiently those services can be managed. This is when Juju was born: our service orchestration tool built for the cloud and inspired by the way IT architects visualise their infrastructure – boxes representing services, connected by lines representing interfaces or relationships. Juju’s GUI makes it simple to search for a ‘Charm’, then drag and drop it onto a canvas to deploy services instantly.

Today, we are announcing two new features for DevOps teams seeking ever faster and easier ways of deploying scalable infrastructure. The first is Juju Charm bundles, which allow you to deploy an entire cloud environment with one click. The second is Quickstart, which spins up an entire Juju environment and deploys the necessary services to run Juju, all with one command. Juju Bundles and Quickstart are powerful tools on their own, but the real value comes when they are used together: Quickstart can be combined with bundles to rapidly launch Juju, start up the environment, and deploy an entire application infrastructure, all in one action.
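Concretely, that single action looks something like the following sketch; the bundle identifier is illustrative rather than a specific bundle from the store:

    # Install the Quickstart plugin, then bootstrap an environment, install the
    # Juju GUI, and deploy every service in the named bundle in one go
    sudo apt-get install juju-quickstart
    juju quickstart bundle:mediawiki/single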

Already there are several bundles available that cover key technology areas: security, big data, SaaS, back office workloads, web servers, content management and the integration of legacy systems. New Charm bundles available today include:

Bundles for complex services:

  • Instant Hadoop: The Hadoop cluster bundle is a 7-node starter cluster designed to deploy Hadoop in a way that’s easily scalable. The deployment has been tested with up to 2,000 nodes on AWS.

  • Instant Mongo: a 13-node (over three shards) starter MongoDB cluster, with the capability to horizontally scale all three shards.

  • Instant Wiki: Two Mediawiki deployments; a simple example mediawiki deployment with just mediawiki and MySQL; and a load balanced deployment with HAProxy and memcached, designed to be horizontally scalable.

  •  A new bundle from import.io allows their SaaS platform to be instantly integrated inside Juju. Navigate to any website using the import.io browser, template the data and then test your crawl. Finally, use the import.io charm to crawl your data directly into ElasticSearch.
  • Instant Security: Syncope + PostgreSQL, developed by Tirasa, is a bundle providing Apache Syncope with its internal storage up and running on PostgreSQL. Apache Syncope is an open source system for managing digital identities in enterprise environments.

  • Instant Enterprise Solutions: Credativ, experts in Open Source consultancy, are showing with their OpenERP bundle how any enterprise can instantly deploy an enterprise resource planning solution.

  • Instant High Performance Computing: HPCC (High Performance Computing Cluster) is a massive parallel-processing computing platform that solves Big Data problems. The platform is Open Source and can now be instantly deployed via Juju.

Francesco Chicchiriccò, CEO of Tirasa and VP of Apache Syncope, comments: “The immediate availability of an Apache Syncope Juju bundle dramatically shortens the product evaluation process and encourages adoption. With this additional facility to get started with Open Source Identity Management, we hope to increase the deployments of Apache Syncope in any environment.”

 

Bundles for developers:

These bundles provide ‘hello world’ blank applications; they are designed as templates for application developers. Simply put, they provide templates with configuration options for an application:

  • Instant Django: A Django bundle with gunicorn and PostgreSQL modelled after the Django ‘Getting Started’ guide is provided for application developers.

  • Instant Rails: Two Rails bundles: one is a simple Rails/Postgres deployment; the ‘scalable’ bundle adds HAProxy, Memcached, Redis, Nagios (for monitoring) and Logstash/Kibana (for logging), providing an application developer with an entire scalable Rails stack.

  • Instant Wildfly (the community JBoss): The new Wildfly bundle from Technology Blueprint provides an out-of-the-box Wildfly application server in standalone mode running on OpenJDK 7. Currently, MySQL as a datasource is also supported via a MySQL relation.

Technology Blueprint, creators of the Wildfly bundle, also uses Juju to build its own cloud environments. The company’s system administrator, Saurabh Jha, comments: “Juju bundles are really beneficial for programmers and system administrators. Juju saves time, effort and cost. We’ve used it to create our environment on the fly. All we need is a quick command and the whole setup gets ready automatically. No more waiting for installing and starting those heavy applications/servers manually; a bundle takes care of that for us. We can code, deploy and host our application, and when we don’t need it, we can just destroy the environment. It’s that easy.”

You can browse and discover all new bundles on jujucharms.com.

Our entire ecosystem is hard at work too, charming up their applications and creating bundles around them. Upcoming bundles to look forward to include a GNU Cobol bundle, which will enable instant legacy integration; a telecom bundle to instantly deploy and integrate Project Clearwater – an open source IMS; and many others. No doubt you have ideas for a bundle that gives an instant solution to a common problem. It has never been easier to see your ideas turn into reality.

==

If you would like to create your own charm or bundle, here is how to get started: http://developer.ubuntu.com/cloud/ or see a video about Charm Bundles:  https://www.youtube.com/watch?v=eYpnQI6GZTA.

And if you’ve never used Juju before, here is an excellent series of blog posts that will guide you through spinning up a simple environment on AWS: http://insights.ubuntu.com/resources/article/deploying-web-applications-using-juju-part-33/.

Need help or advice? The Juju community is here to assist https://juju.ubuntu.com/community.

Finally, for the more technically-minded, here is a slightly more geeky take on things by Canonical’s Rick Harding, including a video walkthrough of Quickstart.

Read more
Prakash Advani

 Google is currently in the best position to challenge Amazon because they have the engineering culture and technical abilities to release some really innovative features. IBM has bought into some excellent infrastructure at Softlayer but still has to prove its cloud engineering capabilities.

Amazon has set the standard for how we expect cloud infrastructure to behave, but Google doesn’t conform to these standards in some surprising ways. So, if you’re looking at Google Cloud, here are some things you need to be aware of.

Read More: http://gigaom.com/2014/03/02/5-things-you-probably-dont-know-about-google-cloud/

Read more
Prakash Advani

Demand for people with Linux skills is increasing, a trend that appears to follow a shift in server sales.

Cloud infrastructure, including Amazon Web Services, is largely Linux based, and cloud services’ overall growth is increasing Linux server deployments. As many as 30% of all servers shipped this year will go to cloud services providers, according to research firm IDC.

This shift may be contributing to Linux hiring trends reported by the Linux Foundation and IT careers website Dice, in a report released Wednesday. The report states that 77% of hiring managers have put hiring Linux talent on their list of priorities, up from 70% a year ago.

Read More: http://www.computerworld.in/news/demand-for-linux-skills-rises

Read more
Mark Baker

Two of the most frequently asked questions about Ubuntu and Canonical are:

* So, just how do you make money when Ubuntu is free?

and

* Ubuntu is great for developers, but is it really suitable for ‘enterprise use’?

We’re trying to do things differently, so we’re not surprised by these questions. What many people hear from other successful open source companies seems to narrow thinking about the value chain and open source economics.

So let’s try to explain the answers to these questions: what we are doing, and why Ubuntu has a model better suited for business in 2014 than that of legacy Linux. Six years ago we made the decision to base our strategy for Ubuntu Server around cloud and scale-out computing. We worked hard to make Ubuntu a great instance on Amazon EC2, which, at the time, was just getting going. We created technologies such as cloud-init to handle initialisation of a cloud image. We streamlined the base Ubuntu OS image to create a fast, lightweight base for users and developers to build upon. And, very importantly, we doubled down on our model of releasing to a cadence (every six months) and giving developers access to the latest technologies quickly and easily.

The result? It worked. Ubiquity has spoken and Ubuntu is now the most popular operating system in the cloud – it’s number one on AWS, the leading Linux on Azure, dominates DigitalOcean and is the first choice on most other public clouds. Ubuntu is also W3Techs’ web operating system of the year and the Linux platform showing the fastest growth for online infrastructure, whilst most others are in decline. In 2012 and 2013 we saw Ubuntu and Ubuntu OpenStack being chosen by large financial services organisations and global telcos for their infrastructure. Big-name web-scale innovators like Snapchat, Instagram, Uber, Quora, Hailo and Hipchat, among others, have all chosen Ubuntu as their standard infrastructure platform. We see Ubuntu leading the charge as the platform for software-defined networking, scale-out storage, platform-as-a-service and OpenStack infrastructure. In fact, a recent OpenStack Foundation survey revealed that 55% of respondents are running Ubuntu on OpenStack – over double that of its nearest competitor. If you measure success by adoption, then Ubuntu is certainly winning the market for next-generation, scale-out workloads.

However, many measure business success in monetary terms and, as one industry pundit often reminds us, “a large percentage of a market that pays zero dollars is still zero dollars”. So, let’s come back to the first question: how do you make money when your product is freely available? Ubiquity creates many opportunities for revenue. It can come from paid-for, value-added tools that help manage security and compliance for customers that care about those things. It can come from commercial agreements with cloud providers, and it can come from the product being an optimised, embedded component of a cloud solution delivered by OEMs. The truth is, Canonical is pursuing all of the models above, and we are doing well out of it.

As for enterprise use: enterprises are now really starting to understand that new, high-tech companies are operating their IT infrastructure in radically different ways to their own. Some high-tech companies are able to scale to 1 billion users, 24x7x365, with fewer than 100 staff and frugal IT budgets, and enterprises crave some of that efficiency in their infrastructure. So whilst Ubuntu might not be suitable for an enterprise set on legacy Linux thinking, it is very much where forward-thinking enterprises are headed to stay ahead of the game.

So, the basic values of Ubuntu Server – freely available, giving developers access to the latest technology through a regular cadence of releases, and optimised for cloud and scale out – have been in place for years. Both adoption and revenue confirm it is the right long-term strategy. Enterprises are evolving and starting to adopt Ubuntu, and the model of restricting access to bits unless money is paid is drawing to a close. Others are begrudgingly starting to accept this and trying to evolve their business models to compete with the momentum of Ubuntu.

We welcome it, after all, where is the fun in winning if you have no one to beat?

Read more
Prakash

GoGrid CEO John Keagy says if an organization wants to use a true open source database, like MongoDB, Basho’s Riak, Hadoop or Cassandra, Amazon is not the place to go.

“We want to be an open source alternative,” he says. “If you’re not worried about lock-in then use (AWS). If you’re an enterprise that wants to be able to scale indefinitely and have a flexible architecture then you should identify those needs early and embrace an open source architecture.”

Read More: http://www.computerworld.in/news/gogrid-wants-to-be-your-open-source-alternative-to-amazon’s-cloud-databases

 

Read more
Prakash

PayPal has spoken publicly and regularly about its private OpenStack implementation and recently said that 20 percent of its infrastructure runs on OpenStack.

But it’s only a matter of time before PayPal starts running some of its operations on public clouds, said James Barrese, CTO of PayPal.

“We have a few small apps that aren’t financial related where we’re doing experiments on the public cloud,” he said. “We’re not using it in a way that’s a seamless hybrid because we’re a financial system and have very stringent security requirements.”

Read More: http://www.itworld.com/cloud-computing/400964/private-cloud-poster-child-paypal-experimenting-public-cloud

Read more
Mark Baker

It is with great pride that we saw Ubuntu winning W3tech’s Operating System of the year award.

w3techs_Jan2014

For those of us that work on Ubuntu, increased adoption is one of the most satisfying results of our work and the best measure of whether we are doing the right thing or not. What is most significant, though, as highlighted above, is that this is the third year running that Ubuntu has won the award. The reasoning is fairly simple: the growth of Ubuntu as a platform for online infrastructure has far outstripped that of other operating systems.

w3techs_last3_yrs

In fact, over the last three years only two Linux operating systems showed any growth at all – Debian and Ubuntu, although Gentoo had some traction in 2013.

Ubuntu overtaking CentOS was the most significant change in 2013, and our popularity continues to grow whilst many others decline. Many of the notable web properties of 2013 are confirmed Ubuntu users: Snapchat, Uber, Instagram, Buzzfeed, Hailo, Netflix and so on. Developers at fast-moving, innovative companies love Ubuntu for its flexibility and the ability to get the latest frameworks up and running quickly and easily on cloud or on bare metal.

As observers of the industry will know, tech used in Silicon Valley startups quickly filters through to more traditional Enterprises. With the launch of Ubuntu 14.04 LTS in April, Ubuntu is set for continued greatness this year as more and more businesses seek the agility and innovation shown by many of the hot tech properties. It will be fun trying to make it happen too.

Read about the w3tech awards at:

http://w3techs.com/blog/entry/web_technologies_of_the_year_2013

Images courtesy of w3techs.com

Read more
Prakash

You gotta love it when one vendor helpfully announces another vendor’s plans. That’s what apparently happened Monday when Rackspace Chairman and co-founder Graham Weston was quoted in the Wall Street Journal’s CIO blog saying that Salesforce.com would start running OpenStack’s open-source cloud technology.

Read More: http://gigaom.com/2013/12/17/salesforce-com-will-adopt-openstack-says-rackspace/

Read more
Prakash

According to a new Gartner report, around $3.9 billion will be spent on cloud services in India from 2013 through 2017, of which $1.7 billion will be spent on software-as-a-service (SaaS). The overall public cloud services market in India is also set to grow 33.6% this year to touch $404 million, an increase of $101 million from the 2012 revenue of $303 million, said the research firm.

Read More: http://www.cxotoday.com/story/india-to-spend-39-billion-on-cloud-services-by-2017/

Read more
Prakash

OpenStack, a non-profit organization promoting open source cloud computing software, wants to increase its presence in India.

The organization has formed a three-pronged strategy—launching new products and features, tapping organizations deploying cloud computing, and training the vast channel base of its alliance partners who have a strong presence in the country.

Mark Collier, COO, OpenStack, affirmed, “After the US, India and China are the most important countries for us. We will target the large organizations that are either in the process of deploying, or have a cloud computing strategy in place. And cloud computing requires a lot of business transformation because of the cultural shift and dramatic changes in processes.”

 

Read More: http://www.crn.in/news/software/2013/11/15/openstack-keen-on-indian-market

Read more
Mark Baker

To paraphrase from Mark Shuttleworth’s keynote at the OpenStack Developer Summit last week in Hong Kong, building clouds is no longer exciting. It’s easy. That’s somewhat of an exaggeration, of course, as clouds are still a big choice for many enterprises, but there is still a lot of truth in Mark’s sentiment. The really interesting part about the cloud now is what you actually do with it, how you integrate it with existing systems, and how powerful it can be.

OpenStack has progressed tremendously in its first few years, and Ubuntu’s goal has been to show that it is just as stable, production-ready and easy to deploy and manage as any other cloud infrastructure. For our part, we feel we’ve done a good job, and the numbers certainly seem to support that. More than 3,000 people from 50 countries and 480 cities attended the OpenStack Summit in Hong Kong, a new record for the conference, and a recent IDG Connect survey found that 84 percent of enterprises plan to make OpenStack part of their future clouds.

Clearly OpenStack has proven itself. And, now, the OpenStack community’s aim is making it work even better with more technologies, more players and more platforms to do more complex things more easily. These themes were evident from a number of influential contributors at the event and require an increased focus amongst the OpenStack community:

Global Collaboration

OpenStack’s collaborative roots were exemplified early on with the opening address by Daniel Lai, Hong Kong’s CIO, who talked about how global the initially U.S.-founded project has become. There are now developers in more than 400 cities around the world with the highest concentration of developers located in Beijing.

Focus on the Core

One of the first to directly hit on the theme of needing more collaboration, though, was Mark Shuttleworth with a quote from Albert Einstein: “Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction.” OpenStack has grown fantastically, but we do, as a community, need to ensure we can support that growth rate. OpenStack should focus on the core services and beyond that, provide a mechanism to let many additional technologies plug in, or “let a thousand flowers bloom,” as Mark eloquently put it.

HP’s Monty Taylor also called for more collaboration between all of OpenStack’s players to really continue enhancing the core structure and principle of OpenStack. As he put it, “If your amazing plug-in works but the OpenStack core doesn’t, your plug-in is sitting on a pile of mud.” A bit blunt, but it gets to the point of needing to make sure that the core benefits of OpenStack – that an open and interoperable cloud is the only cloud for the future – are upheld.

Greasing the Wheels of Interoperability

And, that theme of interoperability was at the core of one of Ubuntu’s own announcements at the Hong Kong summit: the Ubuntu OpenStack Interoperability Lab, or Ubuntu OIL. Ubuntu has always been about giving companies choice, especially in the cloud. Our contributions to OpenStack so far have included new hypervisors, SDN stacks and the ability to run different workloads on multiple clouds.

We’ve also introduced Juju, which is one step up from a traditional configuration management tool and is able to distil functions into groups – we call them Charms – for rapid deployment of complex infrastructures and services.
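At the command line, that amounts to deploying Charms as services and relating them, for example with two charms from the store:

    juju deploy wordpress
    juju deploy mysql
    juju add-relation wordpress mysql
    juju expose wordpress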

With all the new capabilities being added to OpenStack, Ubuntu OIL will test all of these options, and other non-OpenStack-centric technologies, to ensure Ubuntu OpenStack offers the broadest set of validated and supported technology options compatible with user deployments.

Collaboration and interoperability testing like this will help ensure OpenStack only becomes easier to use for enterprises, and, thus, more enticing to adopt.

For more information on Ubuntu OIL, or to suggest components for testing in the lab, email us at oil@ubuntu.com or visit http://www.ubuntu.com/cloud/ecosystem/ubuntu-oil

Read more
Prakash

  • US Number 1 Country, India Number 2!
  • Ubuntu No 1 OS.
  • KVM Number 1 Hypervisor.

Read more