Canonical Voices

Posts tagged with 'cloud'

admin

  • Ubuntu Server 13.10 is available from 17 October; it is the first release to fully support the new OpenStack Havana, with VMware vSphere integration, faster node installation and a new version of Juju that supports ultra-dense containerised application deployment.

Canonical today announced that the next version of Ubuntu for server and cloud environments will be released on 17 October 2013.

“Ubuntu 13.10 delivers the latest and best version of OpenStack, and is the fastest, most flexible platform for scale-out computing,” says Mark Shuttleworth, Founder of Ubuntu and VP Products for Canonical. “Ubuntu is typically used in very large scale deployments. In this release we’ve tuned the cloud deployment experience for very small clusters as well, to support dev-and-test environments.” This 13.10 release makes it possible to deploy a full OpenStack cloud on only 5 servers and offers a sophisticated Landscape dashboard for the management of Ubuntu OpenStack clouds no matter their size.

Enterprise management of OpenStack clouds and the workloads deployed on them has been a focus for Canonical in the latest development cycle. “With Landscape, we simplify the lives of enterprise compliance and administration teams, with a full suite of compliance, performance monitoring and security update tools that work on all cloud and physical environments. Now we’ve added real-time dashboards for your OpenStack cloud, too”, says Federico Lucifredi, who leads Ubuntu Server product management.

While Ubuntu itself is an operating system, much of the recent work by Canonical and the Ubuntu community has been to deliver complete solutions and applications on top of it. The breakthrough Juju service orchestration tool from Canonical makes it easy to design, deploy, manage and scale workloads securely from a browser or the command line. In 13.10, Juju can instantly deploy an entire software environment or service as a “bundle” directly from the easy-to-use Juju GUI, improving on the previous deployment of individual components. This reduces complexity and enables administrators to share entire complex workloads consisting of many related parts.

Ubuntu leads the way with integration between OpenStack and VMware vSphere so ESXi users can interoperate with OpenStack. “The ability to deploy Ubuntu OpenStack alongside ESXi with orchestration that spans both properties is extremely valuable, bringing OpenStack right to the centre of common enterprise virtualization practice”, said Mark Shuttleworth.

13.10 introduces Juju management of LXC containers, which allow multiple services to run on the same physical or virtual machine. This gives sysadmins the option of greater density, reducing the total number of machines required to run a service, and reducing cost.
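For example, container placement is expressed directly on the Juju command line with a placement directive; a minimal sketch, assuming an already-bootstrapped environment (the machine number and charm names here are only placeholders):

$ juju deploy mysql --to lxc:0
$ juju deploy rabbitmq-server --to lxc:0

Each service lands in its own LXC container on machine 0, rather than consuming a whole machine per service.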

A new installer enables very rapid provisioning of thousands of nodes, typically five times faster than the best traditional Linux installation process. Ubuntu is uniquely suited to rapid provisioning and re-provisioning in large-scale data centers. The Ubuntu LXC update in 13.10 provides blindingly fast (less than one second) and efficient cloning of containers for faster scaling of containerized services, unique to Ubuntu.
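The cloning itself builds on standard LXC tooling; roughly speaking, a snapshot clone on a copy-on-write filesystem is what makes sub-second copies possible. A hedged sketch (the container names are made up):

$ sudo lxc-clone -s -o precise-base -n web01    # -s requests a snapshot (copy-on-write) clone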

Ubuntu’s OpenStack distribution brings the famous “Ubuntu Just Works” usability to complex cloud deployment; clouds are simple to design, deploy and scale for private or public purposes. Ubuntu 13.10 includes Havana, the latest version of OpenStack, with new and updated tools such as Ceilometer for metering and monitoring, and Heat for auto-scaling.

Havana is also available to customers on Ubuntu 12.04 LTS thanks to the 12.04 Cloud Archive, from Canonical. This means that LTS users can get access to the latest Ubuntu OpenStack release, tools and features while continuing to enjoy the stability and maintenance commitment that backs our current LTS.
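In practice, enabling the Cloud Archive on a 12.04 LTS machine is a one-liner before installing the OpenStack packages; a sketch, assuming the havana pocket:

$ sudo add-apt-repository cloud-archive:havana
$ sudo apt-get update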

 

Availability
Ubuntu Server 13.10 will be available for download from 17 October 2013 at: http://www.ubuntu.com/download. OpenStack Havana release notes: https://wiki.openstack.org/wiki/ReleaseNotes/Havana

 

Read more
Prakash

Netflix has developed Asgard, a web interface that lets engineers and developers manage their AWS infrastructure using a GUI rather than a command line.

Netflix Asgard is open source.

PayPal, a big user of OpenStack, has ported Asgard to OpenStack.

Read More: http://gigaom.com/2013/10/02/paypal-has-rebuilt-netflixs-cloud-management-system-for-openstack/

Read more
Mark Baker

The telco business has long prided itself on providing dependable services all day, every day. Today, dial tones generally survive earthquakes, hurricanes, wars and power cuts, and that is testimony to the service quality telcos provide. This high level of service quality runs through a telco’s DNA, giving their applications the renowned ‘telco-grade’ combination of high quality, high scalability and constant availability. But creating such a culture comes at a cost.

 

The standards are a result of the tightly controlled software used by telcos, which has been tested over many years. Strict processes are employed to minimise the chance of failure of any item in the service, and robust backup or failover services are provided in the event of failure. While this is essential to deliver failsafe services, it also creates a restrictive environment in which launching new services based on new technologies is severely hampered.

 

As a result, new technology businesses are out-manoeuvring telcos by being able to offer services based on the latest development frameworks. These are put together using agile processes and pushed into production by super-smart DevOps teams who have planned application architectures assuming failures will happen. Whether it is Infrastructure as a Service (IaaS) platforms, a move towards IP-based voice and data services, or mobile application delivery services that drive customer engagement and retention, startups and tech companies are all delivering strong solutions into the market and putting pressure on telcos to do the same.

The Telco Application Developer Summit in Bangkok, November 21st and 22nd, aims to accelerate the pace of new service delivery for telcos by enabling developers to discuss the benefits of DevOps and agile practices. With Ubuntu at the centre of many of the recent innovations in the high-tech space, be it OpenStack cloud, Platform as a Service (PaaS), Software Defined Networking (SDN) or public cloud computing, we are very excited to be a part of this conference. We will be in attendance and demonstrating technologies such as Juju, which enables services to be launched and scaled instantly. If you are involved in the delivery of application services for telcos you should check TADS out and maybe we will see you there.

Read more
Federico Lucifredi

Today we’re introducing some new features into Ubuntu’s systems management and monitoring tool, Landscape. Organisations will now be able to use Landscape to manage Hyperscale environments ranging from ARM to x86 low-power designs, adding to Landscape’s existing coverage of Ubuntu in the cloud, data centre server, and desktop environments. There’s an update to the Dedicated Server too, bringing the SaaS and Dedicated Server versions into alignment.

Calxeda 'Serial Number 0' in Canonical's lab

Hyperscale is set to address today’s infrastructure challenges by providing compute capacity with less power for lower cost. Canonical is at the forefront of the trend. Ubuntu already powers scale-out workloads on a new wave of low-cost ultradense hardware based on x86 and ARM processors including Calxeda EnergyCore and Intel Atom designs. Ubuntu is also the default OS for HP’s Project Moonshot servers.


This update includes support for ARM processors and allows organisations to manage thousands of Hyperscale machines as easily as one, making it more cost-effective to run growing networks spanning tens of thousands of devices. The same patch management and compliance features are available for ARM as they are for x86 environments, making Landscape the first systems management tool of a leading Linux vendor to introduce ARM support – and we are doing so on a level of feature parity across architectures.

Calxeda is the leading innovator engaged in bringing ARM chips to servers and partnered with us early on to bring Ubuntu to their new platform. “Landscape system management support for ARM is a huge step forward”, said Larry Wikelius, co-founder and Vice President at Calxeda. “Adding datacenter-class management to the Ubuntu platform for ARM demonstrates Canonical’s commitment to innovation for Hyperscale customers, who are looking to Calxeda to help improve their power efficiency.”

Calxeda 'Serial number 0' in Canonical's Boston lab

“Landscape’s support for the ARM architecture extends to all ARM SoCs supported by Ubuntu, but we adopted the Calxeda EnergyCore systems in our labs as the reference design in light of both their early arrival to market and their maturity”, said Federico Lucifredi, Product Manager for Landscape and Ubuntu Server at Canonical, adding “we are excited to be bringing Landscape to Hyperscale systems on both ARM and x86 Atom architectures.” CIOs and System Administrators considering implementing Hyperscale environments on Ubuntu will now have access to the same enterprise-grade systems management and monitoring capabilities they enjoy in their data centres today with Landscape.

Kurt Keville, HPC Researcher at Massachusetts Institute of Technology (MIT) commented: “MIT’s interest in low power computing designs aims to achieve the execution of production HPC codes at the same level of numerical performance, yet within a smaller power envelope.”  He added: “With Landscape, we can manage our ARM development clusters with the same kind of granularity we are accustomed to on x86 systems. We are able to manage ARM compute clusters without affecting our production network bandwidth in any way”.

Parallella Gen0 prototypes stack

The Parallella Board project aims to make parallel computing ubiquitous through an affordable Open Hardware platform equipped with Open Source tools. Andreas Olofsson, CEO of Adapteva, said: “We selected Ubuntu as our default platform because of its popularity with the developer community and relentless pace of updating, regularly providing our users with the newest builds for any project.” He added: “The availability of a management and monitoring platform like Landscape is essential to managing complexity as the scale of Parallella clusters rapidly reaches into the hundreds or even thousands of nodes.”


As we talk to customers building cloud infrastructure or big data computing environments, it’s clear that power consumption and efficient scaling are key drivers to their architectural decisions. When these considerations are coupled with Landscape’s efficiency and scalable management characteristics, we believe enterprises will be able to achieve a significant shift in both scalability and manageability in their data centre through Hyperscale architecture.

Ubuntu is the default OS for HP’s Project Moonshot cartridges, and either ships with or is available for download to every Moonshot customer, with direct support from HP backed by Canonical’s worldwide support organization. The Landscape update today also means that the full bundle of Ubuntu Advantage support and services becomes available to Moonshot customers.

“Canonical continues to lead the way in the Hyperscale OS arena introducing full enterprise-grade support services for Ubuntu on Hyperscale hardware”, remarked Martin Stadtler, Director of Support Services at Canonical.

Landscape’s Dedicated Server edition has also been refreshed in this update. This means that those businesses choosing to keep the service onsite (rather than hosted) will benefit from the same functionality and a series of updates already available to SaaS customers, including the new audit log facility and performance enhancements, while retaining full local control of their management infrastructure.

Read more
Mark Baker

When it comes to using Linux on an enterprise server, Ubuntu is generally seen as the new challenger in a market dominated by established vendors specifically targeting enterprises. However, we are seeing signs that this is changing. The W3Techs data showing Ubuntu’s continued growth as a platform for online scale-out infrastructure is becoming well known, but a more recent highlight is a review published by Network World of five commercial Linux-based servers (note registration required to read the whole article).

The title of the review “Ubuntu impresses in Linux enterprise test” is encouraging right from the start, but what may surprise some readers are the areas in which the reviewers rated Ubuntu highly:

 

1. Transparency (Free and commercially supported versions are the same.)

This has long been a key part of Ubuntu and we are pleased that its value is gaining broader recognition. From an end user perspective this model has many benefits, primarily the zero migration cost of moving between an unsupported environment (say, in development) and a supported one (in production). With many organisations moving towards models of continuous deployment this can be extremely valuable.

2. Management tools

The reviewers seemed particularly impressed with the management tools that come with Ubuntu, supported with Ubuntu Advantage: Metal as a Service (MAAS), for rapid bare metal provisioning; Juju for service deployment and orchestration; and Landscape for monitoring, security and maintenance management. At Canonical we have invested significantly in these tools over the last few years, so it is good to know that the results have been well received.

Landscape Cloud Support


3. Cloud capability

The availability of cloud images that run on public clouds is called out as being valuable, as is the inclusion of OpenStack to be able to create an OpenStack Cloud. Cloud has been a key part of Ubuntu’s focus since 2008, when we started to create and publish images onto EC2. With the huge growth of Amazon and the more recent rapid adoption of OpenStack, having cloud support baked into Ubuntu and instantly available to end users is valuable.

4. Virtualisation support

It is sometimes thought that Ubuntu is not a great virtualisation platform, mainly because it is not really marketed as being one. The reality, as recognised by the Network World reviewers, is that Ubuntu has great hypervisor support. Like some other vendors we default to KVM for general server virtualisation, but when it comes to hypervisor support for Infrastructure as a Service (IaaS), Ubuntu is far more hypervisor agnostic than many others, supporting not only KVM, but VMware ESXi, and Xen as well. Choice is a good thing.

Of course there are areas of Ubuntu that the reviewers believed to be weak – installation being the primary one. We’ll take this onboard and are confident that future releases will deliver an improved installation experience. There are areas that you could suggest are important to an enterprise that are not covered in the review – commercial application support being one – but the fact remains that viewed as a platform in its own right, with a vast array of open source applications available via Juju, Ubuntu seems to be on the right path. If it continues this way, soon it could well cease to be the challenger and become the leader.

Read more
Prakash

I have been thinking about why people should put their Disaster Recovery (DR) site in the cloud. It makes perfect sense; here is why.

Typically a DR site costs as much as the primary data center, because organisations need to replicate every component of their data center, matching every server with the same specifications: CPU, memory and storage.

DR is necessary because you need business continuity when disaster strikes.

But you may invest all that in DR and disaster may never strike. Is DR worth the investment then?

The solution is to put the DR site in the cloud. The advantages are as follows:

  • You create an exact replica of your setup in the cloud.
  • You fire up the DR environment only when disaster strikes; when there is no disaster you are only paying for the disk space used.
  • You only pay for the full cloud instances when disaster strikes.
  • You not only save money but you are also more environmentally friendly, because you are not keeping servers running unnecessarily.
  • Cloud providers also do their own DR, which enhances your redundancy even further.

Are you worried about putting your data in the public cloud? Then a few companies can get together and set up their own private cloud DR.

Indian enterprises are already adopting DR in the cloud.

Read more
Mark Baker

On Monday August 26th, VMware announced the general availability of their vCloud Hybrid Service. This service, initially opened back in May to a restricted set of early adopters, provides VMware customers with a means of easily moving their workloads out of their own datacentres and into the cloud.

For many customers this is exactly what they want – they may have wanted to move some of their workloads off premise, but found the prospect of switching to a full-blown public cloud provider daunting. vCHS offers them a great way to move workloads to the cloud without having to worry about migrating to new technologies, API compatibility or sourcing a new vendor. At Canonical we have a vision of complete workload portability across any public or private cloud. Sure, it requires that the workloads run on Ubuntu, but Ubuntu’s ubiquity in cloud is close to making this a reality, and with our growth in usage for scale-out workloads such as delivery of online infrastructure far outstripping that of other Linux platforms, it seems that end users don’t have a problem with it. It certainly seems that, with our engagements around OpenStack, Nicira and vCHS, VMware believes in the ubiquity of Ubuntu in cloud. Combined with VMware’s ubiquity in the enterprise, between the two of us we are going to do some great things.

Read more
anthony-c-beckley

We’re excited to announce that Canonical is sponsoring and exhibiting at the forthcoming Dell Solutions Summit, August 27-29th, 2013 in Beijing, China.

Danica Han, our Director of Cloud Alliances for APAC, will be speaking at the summit about Canonical’s commitment to the Chinese market, how we meet the specific needs of Chinese users and how those customers can gain competitive advantage with Ubuntu Cloud and Client deployments.

This session will take place on August 28th from 1:30pm – 2:30pm in room 311B.

On our show pods, our team in China will showcase our market-beating cloud management and deployment solutions: Landscape, which enables customers to manage thousands of Ubuntu machines as easily as one, and Juju, our game-changing cloud service orchestration tool.

Additionally, we will be demonstrating UbuntuKylin on Dell desktops, developed specifically for China and Chinese users with the members of the CCN Joint Lab. UbuntuKylin was awarded the Number 1 China Open Source Project for 2013 at the eighth Open Source China – Open Source World Summit in Beijing, and is an exciting development, bringing a world-leading open source desktop operating system enhanced specifically for China.

Interested in attending? Register here

We look forward to seeing you at the show!

Read more
Prakash

In Silicon Valley, tech startups typically build their businesses with help from cloud computing services — services that provide instant access to computing power via the internet — and Frenkiel’s startup, a San Francisco outfit called MemSQL, was no exception. It rented computing power from the granddaddy of cloud computing, Amazon.com.

But in May, about two years after MemSQL was founded, Frenkiel and company came down from the Amazon cloud, moving most of their operation onto a fleet of good old fashioned computers they could actually put their hands on. They had reached the point where physical machines were cheaper — much, much cheaper — than the virtual machines available from Amazon. “I’m not a big believer in the public cloud,” Frenkiel says. “It’s just not effective in the long run.”

Read More.

Read more
Prakash

Amazon’s CDN (Content Delivery Network) service CloudFront is now in India. They have launched with edge servers in Mumbai and Chennai.

If you are already using CloudFront, you don’t need to do anything. Users in India will now get faster service through CloudFront.

Read more on the announcement.

Read more
Robbie


So I’m partially kidding…the Ubuntu Edge is quickly becoming a crowdfunding phenomenon, and everyone should support it if they can.  If we succeed, it will be a historic moment for Ubuntu, crowdfunding, and the global phone industry as well.

But I Don’t Wanna Talk About That Right Now

While I’m definitely a fan of the phone stuff, I’m a cloud and server guy at heart and what’s gotten me really excited this past month have been two significant (and freaking awesome) announcements.

#1 The Juju Charm Championship


First off, if you still don’t know about Juju, it’s essentially our attempt at making Cloud Computing for Human Beings.  Juju allows you to deploy, connect, manage, and scale web services and applications quickly and easily…again…and again…AND AGAIN!  These services are captured in what we call charms, which contain the knowledge of how to properly deploy, configure, connect, and scale the services and applications you will want to deploy in the cloud.  We have hundreds of charms for every popular and well-known web service and application in use in the cloud today.  They’ve been authored and maintained by the experts, so you don’t have to waste your time trying to become one.  Just as Ubuntu depends on a community of packagers and developers, so does Juju.  Juju goes only as far as our Charm Community will take us, and this is why the Charm Championship is so important to us.

So….what is this Charm Championship all about?  We took notice of the fantastic response to the Cloud-Prize contest run by our good friends (and Ubuntu Server users) over at Netflix.  So we thought we could do something similar to boost the number of full service solutions deployable by Juju, i.e. Charm Bundles.  If charms are the APT packages of the cloud, bundles are effectively the package seeds, thus allowing you to deploy groups of services, configured and interconnected all at once.  We’ve chosen this approach to increase our bundle count because we know from our experience with Ubuntu that the best approach for growth will be by harvesting and cultivating the expertise and experience of the experts regularly developing and deploying these solutions.  For example, we at Canonical maintain and regularly deploy an OpenStack bundle to allow us to quickly get our clouds up for both internal use and for our Ubuntu Advantage customers.  We have master level expertise in OpenStack cloud deployments, and thus have codified this into our charms so that others are able to benefit.  The Charm Championship is our attempt to replicate this sharing of similar master level expertise across more service/application bundles…..BY OFFERING $30,000 USD IN PRIZE MONEY! Think of how many Ubuntu Edge phones that could buy you…well, unless you desperately need to have one of the first 50 :-).

#2 JujuCharms.com

From the very moment we began thinking about creating Juju years ago…we always envisioned eventually creating an interface that provides solution architects the ability to graphically create, deploy, and interact with services visually…replicating the whiteboard planning commonly employed in the planning phase of such solutions.

The new Juju GUI, now integrated into JujuCharms.com, is the realization of our vision, and I’m excited as hell at the possibilities opened and the technical roadblocks removed by the release of this tool.  We’ve even charmed it, so you can ‘juju deploy juju-gui’ into any supported cloud, bare metal (MAAS), or local workstation (via LXC) environment.  Below is a video of deploying OpenStack via our new GUI, and a perfect example of the possibilities that are opened up now that we’ve released this innovative and f*cking awesome tool:

The best part here is that you can play with the new GUI RIGHT NOW by selecting the “Build” option on jujucharms.com….so go ahead and give it a try!
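And if you would rather poke at it inside your own environment, the commands are about as minimal as it gets; a sketch, assuming an environment you have already bootstrapped:

$ juju deploy juju-gui
$ juju expose juju-gui
$ juju status    # note the juju-gui unit's public address and open it in a browser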

Join the Championship…Play with the GUI…then Buy the Phone

Cause I will definitely admit…it’s a damn sexy piece of hardware. ;)


Read more
Prakash

From Forbes:

Cost savings… elasticity….  scalability….  load “bursting”….  storage on demand…  These are the advertised benefits of cloud computing, and they certainly help make for a solid business case for using either third-party services or a virtualized data center.

But after the agreements are signed, systems and processes are set up, and users are retrained, something unexpected happens. The  initial use cases are realized, but then additional benefits begin to emerge — sort of like the icing on the cake, but often, these unforeseen benefits provide far more value to the business than initially planned.

Read more: http://www.forbes.com/sites/joemckendrick/2013/07/21/5-benefits-of-cloud-computing-you-arent-likely-to-see-in-a-sales-brochure/

Read more
Prakash

IBM is backing Cloud Foundry, the open source PaaS platform.

By teaming up with Pivotal and Cloud Foundry, IBM wants to help developers focus on getting apps to the cloud without having to worry about whether the underlying technology will be compatible.

The first product of the IBM-Pivotal partnership is IBM WebSphere Liberty, a lightweight version of IBM’s WebSphere Application Server that, according to IBM, helps developers respond to enterprise and market needs more quickly by enabling less complex, rapid development and deployment of web, mobile, social and analytics applications using fewer resources.

Read More: http://www.crn.com/news/cloud/240158905/ibm-pivotal-partner-to-push-cloud-foundry-paas-development.htm

 

Read more
Mark Baker

Juju, the leading tool for continuous integration and continuous deployment (CI/CD) and cloud-neutral orchestration, now has a refreshed GUI with smoother workflows for integration professionals spinning up many services across clouds like Amazon EC2 and a range of public OpenStack providers. The new GUI speeds up service design – conceptual modelling of service relationships – as well as actual deployment, providing a visual map of the relationships between services.

“The GUI is now a first-class part of the Juju experience”, said Gary Poster, whose team led the work, “with an emphasis on rapid access to the collection of service charms and better visualisation of the deployment in question”. In this milestone the Juju GUI can act as a whiteboard, so a user can mock up the service orchestration they intend to create using the same Juju GUI that they will use to manage their real, live deployments. Users can experience the new interface for themselves at jujucharms.com with no need to set up software in advance.

Juju is used by organisations that are constantly deploying and redeploying collections of services. Companies focused on media, professional services, and systems integration are the heaviest users, who benefit from having repeatable best-practice deployments across a range of cloud environments.

Juju uniquely enables the reuse of shared components called ‘charms’ for common parts of a complex service. A large portfolio of existing open source components is available from a public Charm collection, and browsing that collection is built into the new GUI. Charms are easy to find and review in the GUI, with full documentation instantly accessible. Featured, recommended and popular charms are highlighted for easy discovery. Each Charm now has more detailed information including test results from all supported providers, download count, related Charms, and a Charm code quality rating. The Charm collection includes both certified, supported Charms, and a wider range of ad-hoc Charms that are published by a large community of contributors.

The simple browser-based interface makes it easy to find reusable open source charms that define popular services like Hadoop, Storm, Ceph, OpenStack, MySQL, RabbitMQ, MongoDB, Cassandra, Mediawiki and WordPress. Information about each service, such as configuration options, is immediately available, and the charms can then be dragged and dropped directly on a canvas where they can be connected to other services, deployed and scaled. It’s also possible to export these service topologies into a human-readable and -editable format that can be shared within a team or published as a reference architecture for that deployment.
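The same kind of deployment can of course be driven from the command line as well; a hedged sketch using two of the charms named above (treat the relation endpoint as illustrative, since it may differ in the published charm):

$ juju deploy mediawiki
$ juju deploy mysql
$ juju add-relation mediawiki:db mysql
$ juju expose mediawiki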

Recent additions to the public Charm collection include OpenVPN AS, Liferay, Storm and Varnish. For developers the new GUI and Charm Browser mean that their Charms are now much more discoverable. For those taking part in the Charm Championship, it’s easier to upload their Charms and use the GUI to connect them into a full solution for entry into the competition. Submit your best Charmed solution for the possibility of winning $10,000.

The management interface for Charm authors has also been enhanced and is available immediately at http://manage.jujucharms.com/.

See how you can use Juju to deploy OpenStack:

The current version of Juju supports Amazon EC2, HP Cloud and many other OpenStack clouds, as well as in-memory deployment for test and dev scenarios. Juju is on track for a 1.12 release in time for Ubuntu 13.10 that will enhance scalability for very large deployments, and a 2.0 release in time for Ubuntu 14.04 LTS.

See it demoed: We’ll be showing off the new Juju GUI and charm browser at OSCON on Tuesday 23rd at 9:00AM in the Service Orchestration In the Cloud with Juju workshop.

Read more
Mark Baker

We are pleased to announce a seriously good addition to our product team: Ratnadeep (Deep) Bhattacharjee. Deep joins Canonical as Director of Cloud Product Management from VMware, where he led its Cloud Infrastructure Platform effort and has a solid understanding of customer needs as they continue to move to virtual and cloud infrastructure.

Ubuntu has fast become the operating system of choice for cloud computing and Ubuntu is the most popular platform for OpenStack. With Deep’s direction, we plan to continue to lead Ubuntu OpenStack into enterprises, carriers and service providers looking for new ways to deliver next-generation infrastructure without the ‘enterprise’ price tag and lock-in. He will also be key in building out our great integration story with VMware to help customers who will run heterogeneous environments. Welcome Deep!

Read more
Mark Baker

In April at the OpenStack Summit, Canonical founder Mark Shuttleworth quipped “My OpenStack, how you’ve grown” as a reference to the thousands of people in the room. OpenStack is indeed growing up and it seems incredible that this Friday we celebrate OpenStack’s 3rd birthday.

Incredible – it seems like only yesterday OpenStack was a twinkle in the eyes of a few engineers getting together in Austin. Incredible that OpenStack has come so far in such a short time. Ubuntu has been with OpenStack every day of the 3 year journey so far which is why the majority of OpenStack clouds are built on Ubuntu Server and Ubuntu OpenStack continues to be one of the most popular OpenStack distributions available.

It is also why we are proud to host the London OpenStack 3rd Birthday Party at our HQ in London. We’d love to see you using OpenStack with Ubuntu, and even if you don’t, you should come and celebrate OpenStack with us on Friday, July 19th.

http://www.meetup.com/Openstack-London/

Read more
Mark Baker

Ubuntu developer contest offers $10,000 for the most innovative charms

Developers around the world are already saving time and money thanks to Juju, and now they have the opportunity to win money too. Today marks the opening of the Juju Charm Championship, in which developers can reap big rewards for getting creative with Juju charms.

If you haven’t met Juju yet, now’s the ideal time to dive in. Juju is a service orchestration tool, a simple way to build entire cloud environments and deploy, scale and manage complex workloads using only a few commands. It takes all the knowledge of an application and wraps it up into a re-usable Juju charm, ready to be quickly deployed anywhere. And you can modify and combine charms to create a custom deployment that meets your needs.

Juju is a powerful tool, and its flexibility means it’s capable of things we haven’t even imagined yet. So we’re kicking off the Charm Championship to discover what happens when the best developers bring Juju into their clouds — with big rewards on offer.

The prizes

As well as showing off the best achievements to the community, our panel of judges will award $10,000 cash prizes to the best charmed solutions in a range of categories.

That’s not all. Qualifying participants will be eligible for a joint marketing programme with Canonical, including featured application slots on ubuntu.com,  joint webinars and more. Win the Charm Championship and your app will reach a whole new audience.

Get started today

If you’re a Juju wizard, we want to see what magic you’re already creating. If you’re not, now’s a great time to start — it only takes five minutes to get going with Juju.
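To give a flavour of those five minutes, here is a hedged sketch of a first run on an Ubuntu machine; it assumes you add credentials for a supported cloud to ~/.juju/environments.yaml before bootstrapping:

$ sudo apt-get install juju-core
$ $EDITOR ~/.juju/environments.yaml    # add credentials for a supported cloud
$ juju bootstrap                       # provisions the state server in that cloud
$ juju deploy wordpress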

The Charm Championship runs until 1 October 2013, and it’s open to individuals, teams, companies and organisations. For more details and full competition rules, visit the Charm Championship page.


Read more
niemeyer

Note: This is a candidate version of the specification. This note will be removed once v1 is closed, and any changes will be described at the end. Please get in touch if you’re implementing it.



Introduction

This specification defines strepr, a stable representation that enables computing hashes and cryptographic signatures out of a defined set of composite values that is commonly found across a number of languages and applications.

Although the defined representation is a serialization format, it isn’t meant to be used as a traditional one. It may not be seen entirely in memory at once, or written to disk, or sent across the network. Its role is specifically in aiding the generation of hashes and signatures for values that are serialized via other means (JSON, BSON, YAML, HTTP headers or query parameters, configuration files, etc).

The format is designed with the following principles in mind:

Understandable — The representation must be easy to understand to increase the chances of it being implemented correctly.

Portable — The defined logic works properly when the data is being transferred across different platforms and implementations, independently from the choice of protocol and serialization implementation.

Unambiguous — As a natural requirement for producing stable hashes, there is a single way to process any supported value being held in the native form of the host language.

Meaning-oriented — The stable representation holds the meaning of the data being transferred, not its type. For example, the number 7 must be represented in the same way whether it’s being held in a float64 or in an uint16.


Supported values

The following values are supported:

  • nil: the nil/null/none singleton
  • bool: the true and false singletons
  • string: raw sequence of bytes
  • integers: positive, zero, and negative integer numbers
  • floats: IEEE754 binary floating point numbers
  • list: sequence of values
  • map: associative value→value pairs


Representation

nil = 'z'

The nil/null/none singleton is represented by the single byte 'z' (0x7a).

bool = 't' / 'f'

The true and false singletons are represented by the bytes 't' (0x74) and 'f' (0x66), respectively.

unsigned integer = 'p' <value>

Positive and zero integers are represented by the byte 'p' (0x70) followed by the variable-length encoding of the number.

For example, the number 131 is always represented as {0x70, 0x81, 0x03}, independently from the type that holds it in the host language.

negative integer = 'n' <absolute value>

Negative integers are represented by the byte 'n' (0x6e) followed by the variable-length encoding of the absolute value of the number.

For example, the number -131 is always represented as {0x6e, 0x81, 0x03}, independently from the type that holds it in the host language.

string = 's' <num bytes> <bytes>

Strings are represented by the byte 's' (0x73) followed by the variable-length encoding of the number of bytes in the string, followed by the specified number of raw bytes. If the string holds a list of Unicode code points, the raw bytes must contain their UTF-8 encoding.

For example, the string hi is represented as {0x73, 0x02, 'h', 'i'}

Due to the complexity involved, Unicode normalization is not required by this specification. Consequently, Unicode strings that would be equal if normalized may have different stable representations.

binary float = 'd' <binary64>

32-bit or 64-bit IEEE754 binary floating point numbers that are not holding integers are represented by the byte 'd' (0x64) followed by the big-endian 64-bit IEEE754 binary floating point encoding of the number.

There are two exceptions to that rule:

1. If the floating point value is holding a NaN, it must necessarily be encoded by the following sequence of bytes: {0x64, 0x7f, 0xf8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}. This ensures all NaN values have a single representation.

2. If the floating point value is holding an integer number it must instead be encoded as an unsigned or negative integer, as appropriate. Floating point values that hold integer numbers are defined as those where floor(v) == v && abs(v) != ∞.

For example, the value 1.1 is represented as {0x64, 0x3f, 0xf1, 0x99, 0x99, 0x99, 0x99, 0x99, 0x9a}, but the value 1.0 is represented as {0x70, 0x01}, and -0.0 is represented as {0x70, 0x00}.

This distinction means all supported numbers have a single representation, independently from the data type used by the host language and serialization format.

list = 'l' <num items> [<item> ...]

Lists of values are represented by the byte 'l' (0x6c), followed by the variable-length encoding of the number of items in the list, followed by the stable representation of each item in the list in the original order.

For example, the value [131, -131] is represented as {0x6c, 0x02, 0x70, 0x81, 0x03, 0x6e, 0x81, 0x03}

map = 'm' <num pairs> [<item key> <item value>  ...]

Associative maps of values are represented by the byte 'm' (0x6d) followed by the variable-length encoding of the number of pairs in the map, followed by an ordered sequence of the stable representation of each key and value in the map. The pairs must be sorted so that the stable representation of the keys is in ascending lexicographical order. A map must not have multiple keys with the same representation.

For example, the map {"a": 4, 5: "b"} is always represented as {0x6d, 0x02, 0x70, 0x05, 0x73, 0x01, 'b', 0x73, 0x01, 'a', 0x70, 0x04}.
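As an aside on intended usage rather than part of the specification: because the representation is stable, hashing it gives a digest that any compliant implementation can reproduce regardless of how the value travelled on the wire. A minimal sketch, assuming bash's printf and coreutils' sha256sum, hashing the map example above byte for byte:

$ printf '\x6d\x02\x70\x05\x73\x01\x62\x73\x01\x61\x70\x04' | sha256sum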


Variable-length encoding

Integers are variable-length encoded so that they can be represented in short space and with unbounded size. In an encoded number, the last byte holds the 7 least significant bits of the unsigned value, and zero as the eighth bit. If there are remaining non-zero bits, the previous byte holds the next 7 bits, and the eighth bit is set on to flag the continuation to the next byte. The process continues until there are no non-zero bits remaining. The most significant bits end up in the first byte of the encoded value, which must necessarily not be 0x80.

For example, the number 128 is variable-length encoded as {0x81, 0x00}.


Reference implementation

A reference implementation is available, including a test suite which should be considered when implementing the specification.


Changes

draft1 → draft2

  • Enforce the use of UTF-8 for Unicode strings and explain why normalization is being left out.
  • Enforce a single NaN representation for floats.
  • Explain that map key uniqueness refers to the representation.
  • Don’t claim the specification is easy to implement; floats require attention.
  • Mention reference implementation.

Read more
niemeyer

The very first time the concepts behind the juju project were presented, at the time still under the prototype name of Ubuntu Pipes, was about four years ago, in July of 2009. It was a short meeting with Mark Shuttleworth, Simon Wardley, and myself, when Canonical still had an office in a tall building by the Thames. That was just the seed of a long road of meetings and presentations that eventually led to the codification of these ideas into what today is a major component of the Ubuntu strategy on servers.

Despite having covered the core concepts many times in those meetings and presentations, it recently occurred to me that they were never properly written down in any reasonable form. This is an omission that I’ll attempt to fix with this post while still holding the proper context in mind and while things haven’t changed too much.

It’s worth noting that I stepped aside as the project’s technical lead in January, which makes it more likely that some of these ideas will take a turn, but they are still of historical value, and true for the time being.

Contents

This post is long enough to deserve an index, but these sections do build up concepts incrementally, so for a full understanding sequential reading is best:


Classical deployments

In a simplistic sense, deploying an application means configuring and running a set of processes in one or more machines to compose an integrated system. This procedure includes not only configuring the processes for particular needs, but also appropriately interconnecting the processes that compose the system.

The following figure depicts a simple example of such a scenario, with two frontend machines that had the Wordpress software configured on them to serve the same content out of a single backend machine running the MySQL database.

Deploying even that simple environment already requires the administrator to deal with a variety of tasks, such as setting up physical or virtual machines, provisioning the operating system, installing the applications and the necessary dependencies, configuring web servers, configuring the database, configuring the communication across the processes including addresses and credentials, firewall rules, and so on. Then, once the system is up, the deployed system must be managed throughout its whole lifecycle, with upgrades, configuration changes, new services integrated, and more.

The lack of a good mechanism to turn all of these tasks into high-level operations that are convenient, repeatable, and extensible, is what motivated the development of juju. The next sections provide an overview of how these problems are solved.


Preparing a blank slate

Before diving into the way in which juju environments are organized, a few words must be said about what a juju environment is in the first place.

All resources managed by juju are said to be within a juju environment, and such an environment may be prepared by juju itself as long as the administrator has access to one of the supported infrastructure providers (AWS, OpenStack, MAAS, etc).

In practice, creating an environment is done by running juju’s bootstrap command:

$ juju bootstrap

This will start a machine in the configured infrastructure provider and prepare the machine for running the juju state server to control the whole environment. Once the machine and the state server are up, they’ll wait for future instructions that are provided via follow up commands or alternative user interfaces.


Service topologies

The high-level perspective that juju takes about an environment and its lifecycle is similar to the perspective that a person has about them. For instance, although the classical deployment example provided above is simple, the mental model that describes it is even simpler, and consists of just a couple of communicating services:

That’s pretty much the model that an administrator using juju has to input into the system for that deployment to be realized. This may be achieved with the following commands:

$ juju deploy cs:precise/wordpress
$ juju deploy cs:precise/mysql
$ juju add-relation wordpress mysql

These commands will communicate with the previously bootstrapped environment, and will input into the system the desired model. The commands themselves don’t actually change the current state of the deployed software, but rather inform the juju infrastructure of the state that the environment should be in. After the commands take place, the juju state server will act to transform the current state of the deployment into the desired one.

In the example described, for instance, juju starts by deploying two new machines that are able to run the service units responsible for Wordpress and MySQL, and configures the machines to run agents that manipulate the system as needed to realize the requested model. An intermediate stage of that process might conceptually be represented as:

topology-step-1

The service units are then provided with the information necessary to configure and start the real software that is responsible for the requested workload (Wordpress and MySQL themselves, in this example), and are also provided with a mechanism that enables service units that were related together to easily exchange data such as addresses, credentials, and so on.

At this point, the service units are able to realize the requested model:

topology-step-2

This is close to the original scenario described, except that there’s a single frontend machine running Wordpress. The next section details how to add that second frontend machine.


Scaling services horizontally

The next step to match the original scenario described is to add a second service unit that can run Wordpress, and that can be achieved by the single command:

$ juju add-unit wordpress

No further commands or information are necessary, because the juju state server understands what the model of the deployment is. That model includes both the configuration of the involved services and the fact that units of the wordpress service should talk to units of the mysql service.

This final step makes the deployed system look equivalent to the original scenario depicted:

topology-step-3

Although that is equivalent to the classic deployment first described, as hinted by these examples an environment managed by juju isn’t static. Services may be added, removed, reconfigured, upgraded, expanded, contracted, and related together, and these actions may take place at any time during the lifetime of an environment.

The way that the service reacts to such changes isn’t enforced by the juju infrastructure. Instead, juju delegates service-specific decisions to the charm that implements the service behavior, as described in the following section.


Charms

A juju-managed environment wouldn't be nearly as interesting if all it could do was constrained by preconceived ideas that the juju developers had about what services should be supported and how they should interact among themselves and with the world.

Instead, the activities within a service deployed by juju are all orchestrated by a juju charm, which is generally named after the main software it exposes. A charm is defined by its metadata, one or more executable hooks that are called after certain events take place, and optionally some custom content.

The charm metadata contains basic declarative information, such as the name and description of the charm, relationships the charm may participate in, and configuration options that the charm is able to handle.

The charm hooks are executable files with well-defined names that may be written in any language. These hooks are run non-concurrently to inform the charm that something happened, and they give a chance for the charm to react to such events in arbitrary ways. There are hooks to inform that the service is supposed to be first installed, or started, or configured, or for when a relation was joined, departed, and so on.
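As an illustration only, and not taken from any published charm, an install hook can be as small as a shell script that pulls in the packages the service needs:

#!/bin/sh
# hooks/install -- hypothetical minimal install hook for a wordpress-style charm
set -e
apt-get update
apt-get install -y wordpress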

This means that in the previous example the service units depicted are in fact reporting relevant events to the hooks that live within the wordpress charm, and those hooks are the ones responsible for bringing the Wordpress software and any other dependencies up.

wordpress-service-unit

The interface offered by juju to the charm implementation is the same, independently from which infrastructure provider is being used. As long as the charm author takes some care, one can create entire service stacks that can be moved around among different infrastructure providers.


Relations

In the examples above, the concept of service relationships was introduced naturally, because it’s indeed a common and critical aspect of any system that depends on more than a single process. Interestingly, despite it being such a foundational idea, most management systems in fact pay little attention to how the interconnections are modeled.

With juju, it’s fair to say that service relations were part of the system since inception, and have driven the whole mindset around it.

Relations in juju have three main properties: an interface, a kind, and a name.

The relation interface is simply a unique name that represents the protocol that is conventionally followed by the service units to exchange information via their respective hooks. As long as the name is the same, the charms are assumed to have been written in a compatible way, and thus the relation is allowed to be established via the user interface. Relations with different interfaces cannot be established.

The relation kind informs whether a service unit that deploys the given charm will act as a provider, a requirer, or a peer in the relation. Providers and requirers are complementary, in the sense that a service that provides an interface can only have that specific relation established with a service that requires the same interface, and vice-versa. Peer relations are automatically established internally across the units of the service that declares the relation, and enable easily clustering together these units to setup masters and slaves, rings, or any other structural organization that the underlying software supports.

The relation name uniquely identifies the given relation within the charm, and allows a single charm (and service and service units that use it) to have multiple relations with the same interface but different purposes. That identifier is then used in hook names relative to the given relation, user interfaces, and so on.

For example, the two communicating services described in the examples above might hold relations defined as:

wordpress-mysql-relation-details

When that service model is realized, juju will eventually inform all service units of the wordpress service that a relation was established with the respective service units of the mysql service. That event is communicated via hooks being called on both units, in a way resembling the following representation:

wordpress-mysql-relation-workflow

As depicted above, such an exchange might take the following form:

  1. The administrator establishes a relation between the wordpress service and the mysql service, which causes the service units of these services (wordpress/1 and mysql/0 in the example) to relate.
  2. Both service units concurrently call the relation-joined hook for the respective relation. Note that the hook is named after the local relation name for each unit. Given the conventions established for the mysql interface, the requirer side of the relation does nothing, and the provider informs the credentials and database name that should be used.
  3. The requirer side of the relation is informed that relation settings have changed via the relation-changed hook. This hook implementation may pick up the provided settings and configure the software to talk to the remote side.
  4. The Wordpress software itself is run, and establishes the required TCP connection to the configured database.

In that workflow, neither side knows for sure what service is being related to. It would be feasible (and probably welcome) to have the mysql service replaced by a mariadb service that provided a compatible mysql interface, and the wordpress charm wouldn’t have to be changed to communicate with it.

Also, although this example and many real world scenarios will have relations reflecting TCP connections, this may not always be the case. It’s reasonable to have relations conveying any kind of metadata across the related services.
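Tying this back to step 3 of the workflow above, the requirer side's hook is typically a small script that reads the settings published by the remote unit through the relation-get hook tool. The following is a hedged sketch; the setting names follow the usual convention for the mysql interface but should be treated as illustrative:

#!/bin/sh
# hooks/db-relation-changed -- hypothetical requirer-side hook
set -e
host=$(relation-get private-address)
database=$(relation-get database)
user=$(relation-get user)
password=$(relation-get password)
# settings may arrive over several events; do nothing until they are all present
if [ -z "$database" ] || [ -z "$user" ] || [ -z "$password" ]; then
    exit 0
fi
# at this point the hook would write the application's database configuration
# using these values and restart or reload the service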


Configuration

Service configuration follows the same model of metadata plus executable hooks that was described above for relations. A charm can declare what configuration settings it expects in its metadata, and how to react to setting changes in an executable hook named config-changed. Then, once a valid setting is changed for a service, all of the respective service units will have that hook called to reflect the new configuration.
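On the charm side, that hook is usually a thin wrapper around the config-get hook tool; a hedged sketch, using the title option from the command-line example that follows:

#!/bin/sh
# hooks/config-changed -- hypothetical sketch
set -e
title=$(config-get title)
# apply the new value, for example by rewriting the relevant
# configuration file and reloading the service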

Changing a service setting via the command line may be as simple as:

$ juju set wordpress title="My Blog"

This will communicate with the juju state server, record the new configuration, and consequently incite the service units to realize the new configuration as described. For clarity, this process may be represented as:

config-changed


Taking from here

This conceptual overview hopefully provides some insight into the original thinking that went into designing the juju project. For more in-depth information on any of the topics covered here, the following resources are good starting points:

Read more