Canonical Voices

Posts tagged with 'cloud'

Mark Baker

In April at the OpenStack Summit, Canonical founder Mark Shuttleworth quipped “My OpenStack, how you’ve grown” in reference to the thousands of people in the room. OpenStack is indeed growing up, and it seems incredible that this Friday we celebrate OpenStack’s 3rd birthday.

Incredible – it seems like only yesterday that OpenStack was a twinkle in the eyes of a few engineers getting together in Austin. Incredible that OpenStack has come so far in such a short time. Ubuntu has been with OpenStack every day of the three-year journey so far, which is why the majority of OpenStack clouds are built on Ubuntu Server and why Ubuntu OpenStack continues to be one of the most popular OpenStack distributions available.

It is also why we are proud to host the London OpenStack 3rd Birthday Party at our HQ in London. We’d love to see you using OpenStack with Ubuntu, but even if you don’t, come and celebrate OpenStack with us on Friday, July 19th.

http://www.meetup.com/Openstack-London/

Read more
Mark Baker

Ubuntu developer contest offers $10,000 for the most innovative charms

Developers around the world are already saving time and money thanks to Juju, and now they have the opportunity to win money too. Today marks the opening of the Juju Charm Championship, in which developers can reap big rewards for getting creative with Juju charms.

If you haven’t met Juju yet, now’s the ideal time to dive in. Juju is a service orchestration tool, a simple way to build entire cloud environments and to deploy, scale and manage complex workloads using only a few commands. It takes all the knowledge of an application and wraps it up into a re-usable Juju charm, ready to be quickly deployed anywhere. And you can modify and combine charms to create a custom deployment that meets your needs.

Juju is a powerful tool, and its flexibility means it’s capable of things we haven’t even imagined yet. So we’re kicking off the Charm Championship to discover what happens when the best developers bring Juju into their clouds — with big rewards on offer.

The prizes

As well as showing off the best achievements to the community, our panel of judges will award $10,000 cash prizes to the best charmed solutions in a range of categories.

That’s not all. Qualifying participants will be eligible for a joint marketing programme with Canonical, including featured application slots on ubuntu.com, joint webinars and more. Win the Charm Championship and your app will reach a whole new audience.

Get started today

If you’re a Juju wizard, we want to see what magic you’re already creating. If you’re not, now’s a great time to start — it only takes five minutes to get going with Juju.

The Charm Championship runs until 1 October 2013, and it’s open to individuals, teams, companies and organisations. For more details and full competition rules, visit the Charm Championship page.

Read more
niemeyer

Note: This is a candidate version of the specification. This note will be removed once v1 is closed, and any changes will be described at the end. Please get in touch if you’re implementing it.

Contents

  • Introduction
  • Supported values
  • Representation
  • Variable-length encoding
  • Reference implementation
  • Changes

Introduction

This specification defines strepr, a stable representation that enables computing hashes and cryptographic signatures out of a defined set of composite values that is commonly found across a number of languages and applications.

Although the defined representation is a serialization format, it isn’t meant to be used as a traditional one. It may not be seen entirely in memory at once, or written to disk, or sent across the network. Its role is specifically in aiding the generation of hashes and signatures for values that are serialized via other means (JSON, BSON, YAML, HTTP headers or query parameters, configuration files, etc).

The format is designed with the following principles in mind:

Understandable — The representation must be easy to understand to increase the chances of it being implemented correctly.

Portable — The defined logic works properly when the data is being transferred across different platforms and implementations, independently from the choice of protocol and serialization implementation.

Unambiguous — As a natural requirement for producing stable hashes, there is a single way to process any supported value being held in the native form of the host language.

Meaning-oriented — The stable representation holds the meaning of the data being transferred, not its type. For example, the number 7 must be represented in the same way whether it’s being held in a float64 or in an uint16.


Supported values

The following values are supported:

  • nil: the nil/null/none singleton
  • bool: the true and false singletons
  • string: raw sequence of bytes
  • integers: positive, zero, and negative integer numbers
  • floats: IEEE754 binary floating point numbers
  • list: sequence of values
  • map: associative value→value pairs


Representation

nil = 'z'

The nil/null/none singleton is represented by the single byte 'z' (0x7a).

bool = 't' / 'f'

The true and false singletons are represented by the bytes 't' (0x74) and 'f' (0x66), respectively.

unsigned integer = 'p' <value>

Positive and zero integers are represented by the byte 'p' (0x70) followed by the variable-length encoding of the number.

For example, the number 131 is always represented as {0x70, 0x81, 0x03}, independently from the type that holds it in the host language.

negative integer = 'n' <absolute value>

Negative integers are represented by the byte 'n' (0x6e) followed by the variable-length encoding of the absolute value of the number.

For example, the number -131 is always represented as {0x6e, 0x81, 0x03}, independently from the type that holds it in the host language.

string = 's' <num bytes> <bytes>

Strings are represented by the byte 's' (0x73) followed by the variable-length encoding of the number of bytes in the string, followed by the specified number of raw bytes. If the string holds a list of Unicode code points, the raw bytes must contain their UTF-8 encoding.

For example, the string hi is represented as {0x73, 0x02, 'h', 'i'}.

Due to the complexity involved in Unicode normalization, it is not required by this specification. Consequently, Unicode strings that would be equal if normalized may have different stable representations.
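
As a non-normative illustration, the following Go sketch implements this rule; encodeUint names the variable-length integer encoder described in the Variable-length encoding section below, and is a helper of this sketch rather than part of the specification:

    // encodeString produces the stable representation of a string:
    // 's', the variable-length encoding of the byte count, and then
    // the raw bytes themselves (UTF-8 for Unicode text).
    func encodeString(s string) []byte {
        out := append([]byte{'s'}, encodeUint(uint64(len(s)))...)
        return append(out, s...)
    }

With it, encodeString("hi") yields {0x73, 0x02, 'h', 'i'}, matching the example above.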

binary float = 'd' <binary64>

32-bit or 64-bit IEEE754 binary floating point numbers that are not holding integers are represented by the byte 'd' (0x64) followed by the big-endian 64-bit IEEE754 binary floating point encoding of the number.

There are two exceptions to that rule:

1. If the floating point value is holding a NaN, it must necessarily be encoded by the following sequence of bytes: {0x64, 0x7f, 0xf8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}. This ensures all NaN values have a single representation.

2. If the floating point value is holding an integer number it must instead be encoded as an unsigned or negative integer, as appropriate. Floating point values that hold integer numbers are defined as those where floor(v) == v && abs(v) != ∞.

For example, the value 1.1 is represented as {0x64, 0x3f, 0xf1, 0x99, 0x99, 0x99, 0x99, 0x99, 0x9a}, but the value 1.0 is represented as {0x70, 0x01}, and -0.0 is represented as {0x70, 0x00}.

This distinction means all supported numbers have a single representation, independently from the data type used by the host language and serialization format.
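
The interplay of the general rule and its two exceptions can be sketched in Go as follows. This is an illustration rather than the reference implementation, and encodeUint is again the hypothetical helper from the Variable-length encoding section:

    import (
        "encoding/binary"
        "math"
    )

    // encodeFloat applies the binary float rule: NaN collapses to a
    // single canonical payload, integral values are re-encoded as
    // integers, and everything else is 'd' followed by the big-endian
    // IEEE754 binary64 bits.
    func encodeFloat(v float64) []byte {
        if math.IsNaN(v) {
            return []byte{'d', 0x7f, 0xf8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
        }
        if math.Floor(v) == v && !math.IsInf(v, 0) {
            if v < 0 {
                return append([]byte{'n'}, encodeUint(uint64(-v))...)
            }
            // -0.0 is not less than zero, so it falls through to the
            // positive branch and encodes as {0x70, 0x00}.
            return append([]byte{'p'}, encodeUint(uint64(v))...)
        }
        buf := make([]byte, 9)
        buf[0] = 'd'
        binary.BigEndian.PutUint64(buf[1:], math.Float64bits(v))
        return buf
    }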

list = 'l' <num items> [<item> ...]

Lists of values are represented by the byte 'l' (0x6c), followed by the variable-length encoding of the number of items in the list, followed by the stable representation of each item in the list in the original order.

For example, the value [131, -131] is represented as {0x6c, 0x02, 0x70, 0x81, 0x03, 0x6e, 0x81, 0x03}.

map = 'm' <num pairs> [<item key> <item value> ...]

Associative maps of values are represented by the byte 'm' (0x6d) followed by the variable-length encoding of the number of pairs in the map, followed by an ordered sequence of the stable representation of each key and value in the map. The pairs must be sorted so that the stable representation of the keys is in ascending lexicographical order. A map must not have multiple keys with the same representation.

For example, the map {"a": 4, 5: "b"} is always represented as {0x6d, 0x02, 0x70, 0x05, 0x73, 0x01, 'b', 0x73, 0x01, 'a', 0x70, 0x04}.
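
To illustrate the ordering requirement, here is a non-normative Go sketch that builds on the earlier encodeString and encodeFloat sketches and on the encodeUint helper described in the next section; the encodeValue dispatcher below is a minimal stand-in covering only a few of the supported kinds:

    import (
        "bytes"
        "sort"
    )

    // encodeMap encodes every key/value pair, sorts the pairs by the
    // stable representation of their keys, and concatenates the result
    // after 'm' and the pair count.
    func encodeMap(m map[interface{}]interface{}) []byte {
        type pair struct{ k, v []byte }
        pairs := make([]pair, 0, len(m))
        for k, v := range m {
            pairs = append(pairs, pair{encodeValue(k), encodeValue(v)})
        }
        sort.Slice(pairs, func(i, j int) bool {
            return bytes.Compare(pairs[i].k, pairs[j].k) < 0
        })
        out := append([]byte{'m'}, encodeUint(uint64(len(pairs)))...)
        for _, p := range pairs {
            out = append(out, p.k...)
            out = append(out, p.v...)
        }
        return out
    }

    // encodeValue is a minimal dispatcher over a few supported kinds;
    // a full implementation would cover all of them.
    func encodeValue(v interface{}) []byte {
        switch v := v.(type) {
        case nil:
            return []byte{'z'}
        case bool:
            if v {
                return []byte{'t'}
            }
            return []byte{'f'}
        case string:
            return encodeString(v)
        case int:
            if v < 0 {
                return append([]byte{'n'}, encodeUint(uint64(-v))...)
            }
            return append([]byte{'p'}, encodeUint(uint64(v))...)
        case float64:
            return encodeFloat(v)
        }
        panic("unsupported kind")
    }

Since the key 5 encodes as {0x70, 0x05} and the key "a" as {0x73, 0x01, 'a'}, sorting the encoded keys puts the 5 pair first, reproducing the documented output above.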


Variable-length encoding

Integers are variable-length encoded so that they can be represented in short space and with unbounded size. In an encoded number, the last byte holds the 7 least significant bits of the unsigned value, and zero as the eighth bit. If there are remaining non-zero bits, the previous byte holds the next 7 bits, and the eighth bit is set to flag the continuation to the next byte. The process continues while there are non-zero bits remaining. The most significant bits end up in the first byte of the encoded value, which must necessarily not be 0x80.

For example, the number 128 is variable-length encoded as {0x81, 0x00}.
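
In Go, the algorithm might be sketched as follows (illustrative only; the reference implementation below is authoritative):

    // encodeUint produces the variable-length encoding described above:
    // big-endian groups of 7 bits, with the eighth bit set on every
    // byte except the last one.
    func encodeUint(v uint64) []byte {
        b := []byte{byte(v & 0x7f)} // last byte: 7 LSBs, eighth bit zero
        for v >>= 7; v > 0; v >>= 7 {
            b = append([]byte{byte(v&0x7f) | 0x80}, b...) // prepend continuation byte
        }
        return b
    }

With this sketch, encodeUint(131) yields {0x81, 0x03} and encodeUint(128) yields {0x81, 0x00}, matching the examples in this document.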


Reference implementation

A reference implementation is available, including a test suite which should be considered when implementing the specification.


Changes

draft1 → draft2

  • Enforce the use of UTF-8 for Unicode strings and explain why normalization is being left out.
  • Enforce a single NaN representation for floats.
  • Explain that map key uniqueness refers to the representation.
  • Don’t claim the specification is easy to implement; floats require attention.
  • Mention reference implementation.

Read more
niemeyer

The concepts behind the juju project were first presented about four years ago, in July of 2009, when it was still under the prototype name of Ubuntu Pipes. It was a short meeting with Mark Shuttleworth, Simon Wardley, and myself, when Canonical still had an office in a tall building by the Thames. That was just the seed of a long road of meetings and presentations that eventually led to the codification of these ideas into what today is a major component of the Ubuntu strategy on servers.

Despite having covered the core concepts many times in those meetings and presentations, it recently occurred to me that they were never properly written down in any reasonable form. This is an omission that I’ll attempt to fix with this post, while I still hold the proper context in mind and while things haven’t changed too much.

It’s worth noting that I stepped aside as the project technical lead in January, which makes it more likely that some of these ideas will take a turn, but they are still of historical value, and true for the time being.

Contents

This post is long enough to deserve an index, but these sections do build up concepts incrementally, so for a full understanding sequential reading is best:

  • Classical deployments
  • Preparing a blank slate
  • Service topologies
  • Scaling services horizontally
  • Charms
  • Relations
  • Configuration
  • Taking from here

Classical deployments

In a simplistic sense, deploying an application means configuring and running a set of processes in one or more machines to compose an integrated system. This procedure includes not only configuring the processes for particular needs, but also appropriately interconnecting the processes that compose the system.

The following figure depicts a simple example of such a scenario, with two frontend machines that had the Wordpress software configured on them to serve the same content out of a single backend machine running the MySQL database.

Deploying even that simple environment already requires the administrator to deal with a variety of tasks, such as setting up physical or virtual machines, provisioning the operating system, installing the applications and the necessary dependencies, configuring web servers, configuring the database, configuring the communication across the processes including addresses and credentials, firewall rules, and so on. Then, once the system is up, the deployed system must be managed throughout its whole lifecycle, with upgrades, configuration changes, new services integrated, and more.

The lack of a good mechanism to turn all of these tasks into high-level operations that are convenient, repeatable, and extensible, is what motivated the development of juju. The next sections provide an overview of how these problems are solved.


Preparing a blank slate

Before diving into the way in which juju environments are organized, a few words must be said about what a juju environment is in the first place.

All resources managed by juju are said to be within a juju environment, and such an environment may be prepared by juju itself as long as the administrator has access to one of the supported infrastructure providers (AWS, OpenStack, MAAS, etc).

In practice, creating an environment is done by running juju’s bootstrap command:

$ juju bootstrap

This will start a machine in the configured infrastructure provider and prepare it for running the juju state server that controls the whole environment. Once the machine and the state server are up, they’ll wait for future instructions that are provided via follow-up commands or alternative user interfaces.


Service topologies

The high-level perspective that juju takes about an environment and its lifecycle is similar to the perspective that a person has about them. For instance, although the classical deployment example provided above is simple, the mental model that describes it is even simpler, and consists of just a couple of communicating services:

That’s pretty much the model that an administrator using juju has to input into the system for that deployment to be realized. This may be achieved with the following commands:

$ juju deploy cs:precise/wordpress
$ juju deploy cs:precise/mysql
$ juju add-relation wordpress mysql

These commands will communicate with the previously bootstrapped environment, and will input into the system the desired model. The commands themselves don’t actually change the current state of the deployed software, but rather inform the juju infrastructure of the state that the environment should be in. After the commands take place, the juju state server will act to transform the current state of the deployment into the desired one.

In the example described, for instance, juju starts by deploying two new machines that are able to run the service units responsible for Wordpress and MySQL, and configures the machines to run agents that manipulate the system as needed to realize the requested model. An intermediate stage of that process might conceptually be represented as:

[figure: topology-step-1]

The service units are then provided with the information necessary to configure and start the real software that is responsible for the requested workload (Wordpress and MySQL themselves, in this example), and are also provided with a mechanism that enables service units that were related together to easily exchange data such as addresses, credentials, and so on.

At this point, the service units are able to realize the requested model:

[figure: topology-step-2]

This is close to the original scenario described, except that there’s a single frontend machine running Wordpress. The next section details how to add that second frontend machine.


Scaling services horizontally

The next step to match the original scenario described is to add a second service unit that can run Wordpress, and that can be achieved by the single command:

$ juju add-unit wordpress

No further commands or information are necessary, because the juju state server understands what the model of the deployment is. That model includes both the configuration of the involved services and the fact that units of the wordpress service should talk to units of the mysql service.

This final step makes the deployed system look equivalent to the original scenario depicted:

[figure: topology-step-3]

Although that is equivalent to the classic deployment first described, as hinted by these examples an environment managed by juju isn’t static. Services may be added, removed, reconfigured, upgraded, expanded, contracted, and related together, and these actions may take place at any time during the lifetime of an environment.

The way that the service reacts to such changes isn’t enforced by the juju infrastructure. Instead, juju delegates service-specific decisions to the charm that implements the service behavior, as described in the following section.


Charms

A juju-managed environment wouldn't be nearly as interesting if all it could do was constrained by preconceived ideas that the juju developers had about what services should be supported and how they should interact among themselves and with the world.

Instead, the activities within a service deployed by juju are all orchestrated by a juju charm, which is generally named after the main software it exposes. A charm is defined by its metadata, one or more executable hooks that are called after certain events take place, and optionally some custom content.

The charm metadata contains basic declarative information, such as the name and description of the charm, relationships the charm may participate in, and configuration options that the charm is able to handle.

The charm hooks are executable files with well-defined names that may be written in any language. These hooks are run non-concurrently to inform the charm that something happened, and they give the charm a chance to react to such events in arbitrary ways. There are hooks to inform the charm that the service is supposed to be first installed, or started, or configured, or that a relation was joined or departed, and so on.

This means that in the previous example the service units depicted are in fact reporting relevant events to the hooks that live within the wordpress charm, and those hooks are the ones responsible for bringing the Wordpress software and any other dependencies up.

[figure: wordpress-service-unit]

The interface offered by juju to the charm implementation is the same, independently from which infrastructure provider is being used. As long as the charm author takes some care, one can create entire service stacks that can be moved around among different infrastructure providers.


Relations

In the examples above, the concept of service relationships was introduced naturally, because it’s indeed a common and critical aspect of any system that depends on more than a single process. Interestingly, despite it being such a foundational idea, most management systems in fact pay little attention to how the interconnections are modeled.

With juju, it’s fair to say that service relations were part of the system since inception, and have driven the whole mindset around it.

Relations in juju have three main properties: an interface, a kind, and a name.

The relation interface is simply a unique name that represents the protocol that is conventionally followed by the service units to exchange information via their respective hooks. As long as the name is the same, the charms are assumed to have been written in a compatible way, and thus the relation is allowed to be established via the user interface. Relations with different interfaces cannot be established.

The relation kind informs whether a service unit that deploys the given charm will act as a provider, a requirer, or a peer in the relation. Providers and requirers are complementary, in the sense that a service that provides an interface can only have that specific relation established with a service that requires the same interface, and vice-versa. Peer relations are automatically established internally across the units of the service that declares the relation, and make it easy to cluster these units together to set up masters and slaves, rings, or any other structural organization that the underlying software supports.

The relation name uniquely identifies the given relation within the charm, and allows a single charm (and service and service units that use it) to have multiple relations with the same interface but different purposes. That identifier is then used in hook names relative to the given relation, user interfaces, and so on.

For example, the two communicating services described in examples might hold relations defined as:

[figure: wordpress-mysql-relation-details]

When that service model is realized, juju will eventually inform all service units of the wordpress service that a relation was established with the respective service units of the mysql service. That event is communicated via hooks being called on both units, in a way resembling the following representation:

[figure: wordpress-mysql-relation-workflow]

As depicted above, such an exchange might take the following form:

  1. The administrator establishes a relation between the wordpress service and the mysql service, which causes the service units of these services (wordpress/1 and mysql/0 in the example) to relate.
  2. Both service units concurrently call the relation-joined hook for the respective relation. Note that the hook is named after the local relation name for each unit. Given the conventions established for the mysql interface, the requirer side of the relation does nothing, and the provider side hands over the credentials and database name that should be used.
  3. The requirer side of the relation is informed that relation settings have changed via the relation-changed hook. This hook implementation may pick up the provided settings and configure the software to talk to the remote side.
  4. The Wordpress software itself is run, and establishes the required TCP connection to the configured database.

In that workflow, neither side knows for sure what service it is being related to. It would be feasible (and probably welcome) to have the mysql service replaced by a mariadb service that provided a compatible mysql interface, and the wordpress charm wouldn’t have to be changed to communicate with it.

Also, although this example and many real world scenarios will have relations reflecting TCP connections, this may not always be the case. It’s reasonable to have relations conveying any kind of metadata across the related services.


Configuration

Service configuration follows the same model of metadata plus executable hooks that was described above for relations. A charm can declare what configuration settings it expects in its metadata, and how to react to setting changes in an executable hook named config-changed. Then, once a valid setting is changed for a service, all of the respective service units will have that hook called to reflect the new configuration.

Changing a service setting via the command line may be as simple as:

$ juju set wordpress title="My Blog"

This will communicate with the juju state server, record the new configuration, and consequently incite the service units to realize the new configuration as described. For clarity, this process may be represented as:

[figure: config-changed]


Taking from here

This conceptual overview hopefully provides some insight into the original thinking that went into designing the juju project. For more in-depth information on any of the topics covered here, the following resources are good starting points:

Read more
Prakash

Raspberry Pi cloud

Computer scientists have made a working model of multi-million-pound cloud computing technology using just Lego bricks and a handful of £20 mini computers.

The University of Glasgow’s Raspberry Pi Cloud project links together 56 Raspberry Pi computer boards in racks made from Lego, which mimic the function and modular design of commercial cloud computing infrastructure.

Read More.

Read more
Mark Baker

“May you live in interesting times.” This Chinese proverb probably resonates well with teams running OpenStack in production over the last 18 months. But, at the OpenStack Summit in Portland, Ubuntu and Canonical founder Mark Shuttleworth demonstrated that life is going to get much less ‘interesting’ for people running OpenStack and that is a good thing.

OpenStack has come a long way in a short time. The OpenStack Summit event in April attracted 3000 attendees with pretty much every significant technology company represented.

Only 12 months ago, being able to install OpenStack in a few hours was deemed an extraordinary feat. Since then, deployment tools such as Juju have simplified the process, and today very large companies such as AT&T, HP and Deutsche Telekom have been able to rapidly push OpenStack clouds into production. This means the community has had to look into solving the next wave of problems – managing the cloud in production, upgrading OpenStack, upgrading the underlying infrastructure and applying security fixes – all without disrupting services running in the cloud.

With the majority of OpenStack clouds running on Ubuntu, Canonical has been uniquely positioned to work on this. We have spent 18 months building out Juju and Landscape, our service orchestration and systems management tools, to solve these problems, and at the Summit, Mark Shuttleworth demonstrated just how far they have come. During a 30-minute session, Mark performed kernel upgrades on a live running system without service interruption. He talked about the integrations and partnerships in place with VMware, Microsoft and Inktank that mean these technologies can be incorporated into an OpenStack cloud on Ubuntu with ease. This is the kind of practicality that OpenStack users need and represents how OpenStack is growing up. It also makes OpenStack less “interesting” and far more adoptable by a typical user, which is what OpenStack needs in order to continue its incredible growth. We at Canonical aim to be with it every step of the way.

Read more
niemeyer

Today ubuntufinder.com was updated with the latest image data for Ubuntu 13.04 and all the previous releases as well. Rather than simply hardcoding the values again, though, the JavaScript code was changed so that it imports the new JSON-based feeds that Canonical has been publishing for the official Ubuntu images that are available in EC2, thanks to recent work by Scott Moser. This means the site is always up-to-date, with no manual actions.

Although the new feeds made that quite straightforward, there was a small detail to sort out: the Ubuntu Finder is visually dynamic, but it is actually a fully static web site served from S3, and the JSON feeds are served from the Canonical servers. This means the same-origin policy won’t allow that kind of cross-domain import to be easily done without further action.

The typical workaround for this kind of situation is to put a tiny proxy within the site server to load the JSON and dispatch it to the browser from the same origin. Unfortunately, this isn’t an option in this case because there’s no custom server backing the data. There’s a similar option that actually works, though: deploying that tiny proxy server in some other corner and forwarding the JSON payload as JSONP or with cross-origin resource sharing enabled, so that browsers can bypass the same-origin restriction, and that’s what was done.

Rather than once again doing a special tiny server for that one service, though, this time around a slightly more general tool has emerged, and as an experiment it has been put live so anyone can use it. The server logic is pretty simple, and the idea is even simpler. Using the services from jsontest.com as an example, the following URL will serve a JSON document that can only be loaded from a page that is in a location allowed by the same-origin policy:

If one wanted to load that page from a different location, it might be transformed into a JSONP document by loading it from:

Alternatively, modern browsers that support cross-origin resource sharing can simply load pure JSON by omitting the jsonpeercb parameter. The jsonpeer server will emit the proper header to allow the browser to load it:

This service is backed by a tiny Go server that lives in App Engine so it’s fast, secure (hopefully), and maintenance-less.

Some further details about the service:

  • Results are JSON with cross-origin resource sharing by default
  • With a query parameter jsonpeercb=<callback name>, results are JSONP
  • The callback name must consist of characters in the set [_.a-zA-Z0-9]
  • Query parameters provided to jsonpeer are used when doing the upstream request
  • HTTP headers are discarded in both directions
  • Results are cached for 5 minutes on memcache before being re-fetched
  • Upstream results must be valid JSON
  • Upstream results must have Content-Type application/json or text/plain
  • Upstream results must be under 500kb
  • Both http and https work; just tweak the URL and the path accordingly
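
For the curious, the essence of such a proxy can be sketched in Go. This is an illustrative toy, not the actual jsonpeer source: the /get/ path convention and the port are assumptions here, and the App Engine integration, memcache-based caching, and the content-type and size checks listed above are omitted:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "regexp"
    )

    var callbackRE = regexp.MustCompile(`^[_.a-zA-Z0-9]+$`)

    // proxy fetches the upstream JSON named by the path and re-serves
    // it either as JSONP (when jsonpeercb is given) or as plain JSON
    // with cross-origin resource sharing enabled.
    func proxy(w http.ResponseWriter, r *http.Request) {
        upstream := "http://" + r.URL.Path[len("/get/"):]
        resp, err := http.Get(upstream)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        if cb := r.FormValue("jsonpeercb"); cb != "" {
            if !callbackRE.MatchString(cb) {
                http.Error(w, "invalid callback name", http.StatusBadRequest)
                return
            }
            w.Header().Set("Content-Type", "application/javascript")
            fmt.Fprintf(w, "%s(%s);", cb, body) // JSONP wrapping
            return
        }
        w.Header().Set("Content-Type", "application/json")
        w.Header().Set("Access-Control-Allow-Origin", "*") // CORS for plain JSON
        w.Write(body)
    }

    func main() {
        http.HandleFunc("/get/", proxy)
        http.ListenAndServe(":8080", nil)
    }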

Have fun if you need it, and please get in touch before abusing it.

UPDATE: The service and blog post were tweaked so that it defaults to returning plain JSON with CORS enabled, thanks to a suggestion by James Henstridge.

Read more
Mark Baker

If you are interested in either OpenStack or MySQL (or both) then you need to know about two meetups running on the evening of May 23rd in London.

The London OpenStack meetup.

This is the 3rd meeting to take place and promises to be a good one with 3 talks planned so far:

* Software defined networking and OpenStack – VMware Nicira’s Andrew Kennedy
* OpenStack Summit Overview – Rackspace’s Kevin Jackson
* An introduction to the Heat API – Red Hat’s Steven Hardy

For a 4th talk we are looking at a customer example – watch this space.

To come along please register at:

http://www.meetup.com/Openstack-London/

The MySQL Meetup.

This group hasn’t met for quite some time, but MySQL remains as popular as ever and new developments with MariaDB mean the group has plenty to catch up on. There are two talks planned so far:

* HP’s database as a service – HP’s Andrew Hutchings

* ‘Whatever he wants to talk about’ – MySQL and MariaDB founder Monty Widenius.

With David Axmark also in attendance, it could be one of the most significant MySQL meetings in London ever. Not one to miss if you are interested in MySQL, MariaDB or related technologies.

MySQL meetups are managed in Facebook – please register to attend here:

http://www.meetup.com/The-London-MySQL-Meetup-Group/events/110243482/

Of course given the events are running in rooms next to each other you are welcome to register for both and switch between them based on the schedule. We hope to see you there!

Read more
anthony-c-beckley

From our Cloud partner Inktank…

Today marks another milestone for Ceph with the release of Cuttlefish (v0.61), the third stable release of Ceph. Inktank’s development efforts for the Cuttlefish release have been focused on Red Hat support and making it easier to install and configure Ceph, while improving the operational ease of integrating with third-party tools such as provisioning and billing systems. As ever, there have also been a ton of new features added to the object and block capabilities of Ceph, as well as to the underlying storage cluster (RADOS), alongside some great contributions from the community.

So what’s new for Ceph users in Cuttlefish?

Ease of installation:

  • Ceph-deploy: a new deployment tool which requires no other tools and allows a user to start running a multi-node Ceph cluster in minutes. Ideal for users who want to do quick proof of concepts with Ceph.
  • Chef recipes: a new set of reference Chef recipes for deploying a Ceph storage cluster, which Inktank will keep authoritative as new features emerge in Ceph. These are in addition to the Puppet scripts contributed by eNovance and Bloomberg, the Crowbar Barclamps developed with Dell, and the Juju charms produced in conjunction by Canonical, ensuring customers can install Ceph using most popular tools.
  • Fully tested RPM packages for Red Hat Enterprise Linux and derivatives, available on both the ceph.com repo and in EPEL (Extra Packages for Enterprise Linux).

Administrative functionality:

  • Admins can now create, delete or modify users and their access keys as well as manipulate and audit users’ bucket and object data using the RESTful API of the Ceph Object Gateway. This makes it easy to hook Ceph into provisioning or billing systems.
  • Administrators can now quickly and easily set a quota for a RADOS pool. This helps with capacity planning and management, as well as preventing specific Ceph clients from consuming all available capacity at the expense of other users.
  • In addition to the pool quotas, administrators can now quickly see the total used and available capacity of a cluster using the ceph df command, very similar to how the generic UNIX df command works with local file systems.

Ceph Block Device (RBD) Incremental Snapshots

It is now possible to take a snapshot of just the recent changes to a Ceph block image. Not only does this reduce the amount of space needed to store snapshots on a cluster, it also forms the foundation for delivering disaster recovery options for volumes as part of popular cloud platforms such as OpenStack and CloudStack.

The complete list of features is available in the release notes at http://ceph.com/docs/master/release-notes/. You can also check out our roadmap page for more information on what’s coming up in future releases of Ceph. If you would like to contribute to Ceph, visit Ceph.com for more information on how to get started, and we invite you to join our online Ceph Development Summit on Tuesday May 7th; more details are available at http://wiki.ceph.com.

Read more
Prakash

The public cloud services market in India is forecast to grow 36 per cent in 2013 to total $443 million, research firm Gartner said today.

The public cloud services market stood at $326 million in 2012, Gartner said in a statement.

Infrastructure as a service (IaaS), including cloud compute, storage and print services, was the fastest-growing segment, growing 22.7 per cent in 2012 to $43.1 million.

It’s expected to further grow 39.6 per cent in 2013 to $60.2 million, Gartner said.

Software as a service (SaaS), which is the largest segment of the cloud services market in India, comprised 36 per cent of the total market in 2012.

Gartner expects that from 2013 through 2017, $4.2 billion will be spent on cloud services in India, of which $1.6 billion will be spent on SaaS.

Read More.

Read more
Prakash

Netflix, the popular video-streaming service that takes up a third of all internet traffic during peak hours, isn’t just the single largest internet traffic service. Netflix, without doubt, is also the largest pure cloud service.

Netflix, with more than a billion video delivery instances per month, is the largest cloud application in the world.

At the Linux Foundation’s Linux Collaboration Summit in San Francisco, California, Adrian Cockcroft, director of architecture for Netflix’s cloud systems team, after first thanking everyone “for building the internet so we can fill it with movies”, said that Netflix’s Linux, FreeBSD, and open-source based services are “cloud native”.

By this, Cockcroft meant that even with more than a billion video instances delivered every month over the internet, “there is no datacenter behind Netflix”. Instead, Netflix, which has been using Amazon Web Services since 2009 for some of its services, moved its entire technology infrastructure to AWS in November 2012.

Read More.

Read more
Paul Oh

The emergence of public cloud computing has changed the IT landscape for developers and enterprises, making it significantly easier and more cost effective to develop and deploy new applications, services and infrastructure. Enterprises can choose among cloud providers to meet their needs for performance, features, price and flexibility that will support their technology strategy today as well as in the future.

Today, Microsoft Corp. has announced the general availability of Windows Azure Infrastructure Services, its public cloud offering with the ability to create and manage both Windows and Linux virtual machines. As part of Canonical’s Certified Public Cloud Program, Ubuntu on Windows Azure is fully certified and has been tested and optimized by Canonical and Microsoft for excellent performance and reliability. Enterprises that require both Windows and Linux can choose the right operating system for running their workloads based on application performance and availability.

Canonical and Microsoft have been working together to make Ubuntu run seamlessly on Windows Azure. As Bob Kelly, Corporate Vice President, Server and Tools Business at Microsoft commented:

“Windows Azure is committed to openness and interoperability. Having Ubuntu available to Windows Azure users is a big step forward for interoperability in the public cloud. Our customers can deploy mission critical applications on both Windows Server and Linux and across both public and private clouds.”

Ubuntu Server is highly available, secure, built for scale and provides the tools that simplify and reduce the cost of cloud deployments. So, for enterprises looking to deploy demanding cloud-oriented workloads such as Hadoop, Cassandra and other scale-out applications, Ubuntu on Windows Azure will be a familiar and well suited offering that provides maximum deployment flexibility. This includes hybrid clouds, where applications and data can remain behind the company firewall for security or compliance reasons while still being able to access public cloud resources on demand. As the leading guest OS in most major public clouds, Ubuntu can be deployed across multiple public clouds at scale for pricing and redundancy benefits, as well as avoiding lock-in to a single cloud provider.

At Canonical, we invest in the Ubuntu experience to provide the most complete combination of performance, update handling, compliance and reliability in the market. We also extend our commercial offerings of support, systems management, audit compliance and IP assurance to commercial customers using Ubuntu on certified public clouds.

Read more
Ben Howard

We are pleased to announce that Canonical has stood up official mirrors in HP Cloud's AZ-1, 2, and 3 regions.

If you are using Ubuntu Server 12.10 Cloud Images, there is no action to take; 12.10 images are by default configured to use the new mirror address.

For Ubuntu 12.04 instances, the default Ubuntu image does not automatically use the in-HP Cloud mirrors. We are currently working with HP to publish a new image that defaults to the local mirrors. If you would like to switch to the new in-HP mirrors, simply run:
    $ sudo sed -i -e \
            's,//archive.ubuntu.com/ubuntu,//nova.clouds.archive.ubuntu.com/ubuntu,g' \
            /etc/apt/sources.list

    $ sudo apt-get -y update

Note: *.clouds.archive.ubuntu.com is configured using split-horizon DNS. This means that the answer to DNS queries is based on the originating IP address; only queries originating within HP Cloud are answered with the HP Cloud mirror addresses. If your DNS resolvers are not based in HP Cloud, then you will be unable to benefit from these new mirrors.

Read more
Prakash

Google unveiled a “patent pledge” that it hopes will shield cloud software and big data developers from the type of litigation that has engulfed the mobile phone industry. The pledge, which is like a non-aggression pact, covers ten patents related to Google’s MapReduce technology.

The pledge, which Google announced on Thursday, says that developers are free to use or sell the technology described in the patents without fear of future lawsuits. The shield applies, however, only to projects based on open source software that is available to all.

The ten patents in Google’s pledge include a controversial one issued last year that covers a form of parallel processing known as MapReduce. The patent gave rise to fears that Google would be able to monopolize tools like Hadoop, which is an integral part of the so-called “big data” revolution that is fueling a wide range of new products and services. Google’s pledge appears intended to allay that fear.

Read More.

Read more
Prakash

Cloud computing represents a fundamental shift in the way technology services will be delivered to enterprises, forcing IT firms to re-examine how they operate, according to Pat Gelsinger, chief executive officer of VMware, which provides software that enables the creation of cloud computing infrastructure within corporate premises.

Gelsinger is convinced that not all major IT firms (including Indian ones) will survive this wave of technology transition. Change may mean sacrificing revenue in the short term, said Gelsinger, an Intel veteran who was rumoured to replace the retiring Intel CEO Paul Otellini, a rumour he denied.

Read more.

Read more
David Duffey

Today we announced a collaborative support and engineering agreement with Dell.  As part of this agreement Canonical will add Dell 11G & 12G PowerEdge models to the Ubuntu Server 12.04 LTS Certification List and Dell will add Ubuntu Server to its Linux OS Support Matrix.

In May 2012, Dell launched the OpenStack Cloud Reference Architecture using Ubuntu 12.04 LTS on select PowerEdge-C series servers. Today’s announcement expands upon that offering by combining the benefits of Ubuntu Server Certification, Ubuntu Advantage enterprise support, and Dell Hardware ProSupport across the PowerEdge line.

Dell customers can now deploy with confidence when purchasing Dell PowerEdge servers with Dell Hardware ProSupport and Ubuntu Advantage.  When these customers call into Dell, their service tag numbers will be entitled with ProSupport and Ubuntu Advantage, which will create a seamless support experience via the collaborative Dell and Canonical support and engineering relationship.

In preparation for this announcement, Canonical engineers worked with Dell to enable and validate Ubuntu Server running on Dell PowerEdge Servers.  This work resulted in improved Ubuntu Server on Dell PowerEdge support for PCIe SSD (solid state drives), 4K-block drives, EFI booting, Web Services Management, consistent network device naming, and PERC (PowerEdge RAID Controllers).

Dell hardware systems management can be done out-of-band via ipmi, iDRAC, and the Lifecycle Controller.  Dell OMSA Ubuntu packages are also available but it is recommended to use the supported out-of-band systems management tools.  Dell TechCenter is a good resource for additional technical information about running Ubuntu Server on Dell PowerEdge servers.

If you are interested in purchasing Ubuntu Advantage for your Dell PowerEdge servers, please contact the Dell Solutions team at Canonical.  If your business is already using or thinking about using a supported Ubuntu Server infrastructure in your data-center then be sure to fill out the annual Ubuntu Server and Cloud Survey to provide additional feedback.

Read more
Prakash

A very open cloud

Businesses will double the amount of data they send across networks in the next few years. In the US, that means greater use of managed IP. Australia, though, like a lot of countries, is heading in the opposite direction.

Cisco’s Virtual Networking Index provides sophisticated forecasts of how we will use networks over the next few years. It estimates that the amount of data transferred by businesses will increase from 6 trillion gigabytes last year to 12 trillion by 2016. Or, if you prefer, 12,051 petabytes.

Read More.

Read more
niemeyer

A small and fun experiment is out:

Read more
Steve George

Dell announced today an updated XPS 13, preloaded with Ubuntu, which has a full high-definition 1080p display. It will be available for sale in the USA and Canada, but as part of this update Dell will also be making it available in parts of Europe, the Middle East and Africa.

As we reported in November, the Dell XPS 13 is a high-end ultramobile laptop, offering developers a complete client-to-cloud experience. It is the result of Dell’s bold Sputnik initiative, which embraced the community and received a terrific response from developers around the world. With Ubuntu 12.04 LTS preloaded, the machine is perfect for developers and anyone who wants high speed, brilliant graphics and smart design.

If you’re keen to get your hands on a new Dell XPS 13 Developer Edition with Ubuntu pre-loaded, check out our web page for more details and links:

  http://www.ubuntu.com/partners/dell/dellxps

We’ll post more links allowing you to buy in additional countries as soon as we can.

Read more
anthony-c-beckley

We are exhibiting at this year’s CeBIT event on March 5-9th, 2013 in Hannover, Germany, in conjunction with our partner in the region, Teuto.net, and we’re giving away a number of free tickets to selected customers and partners. If you are interested in one of these tickets, please contact me at anthony.beckley@canonical.com for more information.

The Canonical/Teuto.net stand will be in the Open Source Arena (Hall 6, Stand F16 (030)) and we will be showcasing two enterprise technology areas:

  • The Ubuntu Cloud Stack – demonstrating end user access to applications via an OpenStack cloud, powered by Ubuntu,
  • Ubuntu Landscape Systems Management – demonstrating ease of management of desktop, server and cloud nodes.

We will be running hourly demonstrations on our stand, and attendees will have the chance to win a Google Nexus 7 tablet! Simply come to our stand and watch a short demo for your chance to win. If you would like to pre-register for a demonstration, email me at anthony.beckley@canonical.com.

We look forward to seeing you at the show!

CeBIT draws a live audience of more than 3,000 people from over 100 different countries. In just five days the show delivers a panoramic view of the digital world’s mainstay markets: ICT and Telecommunications, Digital Media and Consumer Electronics.
To learn more about CeBIT click here.

Read more