I ran across an article last week about the fear of cloud lock-in being a “key concern of companies considering a cloud move”. The article was spot on in pointing out that dependence upon some of the higher level public cloud service features hinders a user’s ability to migrate to another cloud. There is a real risk in being locked into a public cloud service, not only due to dependence on the vendor’s services, but also the complexity and costs of trying to move your data out. The article concludes by stating that there “aren’t easy answers to this problem”, which I think is true…but I also think by simply keeping two things in mind, a user can do a lot to mitigate the lock-in risk.
Whatever solutions you decide to deploy, it’s absolutely critical that you choose an operating system not produced by the public cloud provider. This recent fad of public cloud providers creating their own specific OS is just history repeating itself, where HP-UX, IRIX, Solaris, and AIX are being replaced with the likes of GCEL and Amazon Linux. Sure, the latter are Linux-based, but just like the proprietary UNIX operating systems of the past, they are developed internally, only support the infrastructure they’re designed for, and are only serviceable by the company that produces them. Of course the attraction to using these operating systems is understandable, because the provider can offer them for “free” to users desiring a supported OS in the cloud. They can even price services lower to customers who use their OS as an incentive and “benefit”, with the claim that it allows them to provide better and faster support. It’s a perfect solution…at first. However, once you’ve deployed your solution to a public cloud vendor-specific OS, you have taken a huge first step towards lock-in. Sure, the provider can say their OS is based on an independently produced operating system, but that means nothing once the two have diverged due to security updates and fixes, not to mention release schedules and added features. There’s no way the public cloud vendor OS can keep up, and they really have no incentive to, because they’ve already got you…the longer you stay on their OS, the more you will depend on their application and library versions, and thus the deeper you get. A year or two down the road, another public cloud provider pops up with better service and/or prices, but you can’t move without the risk of extended downtime and/or loss of data, in addition to the cost of paying your IT team the overtime it will take to architect such a migration. We’ve all been here before with proprietary UNIX, and luckily Linux arrived on the scene just in time to save us.
Most of the lock-in features provided by public clouds are simply “Services as a Service”, be it a database service, big data/mapreduce service, or a development platform service like rails or node. All of these services are just applications easily deployed, scaled, and connectable to existing solutions. Of course it’s easy to understand the attraction to using these public cloud provider services, because it means no setup, no maintenance, and someone else to blame if s**t goes sideways with the given service. However, again, by accepting these services, you are also accepting a level of lock-in. By creating/adapting your solution(s) to use the load balancing, monitoring, and/or database service, you are making them less portable and thus harder/costlier for you to migrate. I can’t blame the providers for doing this, because it makes *perfect* sense from a business perspective:
I’m providing a service that is commoditized…I can only play price wars for so long…so how can I keep my customers once that happens…services! And what’s more, I don’t want them to easily use another cloud, so I’ll make sure my services require them to utilize my API…possibly even provide a better experience on my own OS.
Now I’m not saying you shouldn’t use these services, but you should be careful of how much of them you consume and depend on. If you ever intend or need to migrate, you will want a solution that covers the scenario of the next cloud provider not having the same service…or the service being priced at a higher rate than you can afford…or the service quality/performance not being as good. This is where having a good service orchestration solution becomes critical, and if you don’t want to believe me…just ask the folks at IBM or OASIS. And for the record, service orchestration is not configuration management…and you can’t get there by placing a configuration management tool in the cloud. Trying to get configuration management tools to do service orchestration is like trying to teach a child to drive a car. Sure, it can be done pretty well in a controlled empty parking lot…on a clear day. However, once you add unpredictable weather, pedestrians, and traffic, it gets real bad, real quick. Why? Because just like your typical configuration management tool, a child lacks the intelligence to react and adapt to the changing conditions in the environment.
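To make the distinction concrete, here is a minimal sketch (purely illustrative, not Juju code or any real tool’s API) of the difference: configuration management renders a known-good state onto a machine once, while orchestration continuously reacts to conditions in the environment and decides when and why machines get that treatment at all.

```python
# Illustrative sketch (not Juju code): configuration management applies a
# static, known-good state; orchestration reacts to the environment and
# decides how many units should exist in the first place.

def configure(unit):
    """Configuration management: render a known-good state onto one machine."""
    unit["configured"] = True

def orchestrate(service, observed_load, max_load_per_unit=100):
    """Service orchestration: scale the service to match observed conditions."""
    needed = -(-observed_load // max_load_per_unit)  # ceiling division
    while len(service["units"]) < needed:
        unit = {"configured": False}
        configure(unit)                # config management is one step inside...
        service["units"].append(unit)  # ...orchestration decides when and why
    return service

service = {"name": "webapp", "units": [{"configured": True}]}
orchestrate(service, observed_load=350)
print(len(service["units"]))  # scaled to 4 units for a load of 350
```

The point of the sketch: the outer loop is where the “intelligence to react and adapt” lives, and it is exactly the part a configuration management tool does not have.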
Obviously I’m going to encourage the use of Ubuntu Server, not just because I work for Canonical or am an Ubuntu community member, but because I actually believe it’s currently the best option around. Canonical and Ubuntu Server community members have put countless hours of effort into ensuring Ubuntu Server runs well in the cloud, and Canonical is working extremely hard with public cloud providers to ensure our users can depend on our images and public cloud infrastructure to get the fastest, cheapest, and most efficient cloud experience possible. There’s much more to running well in the cloud than just putting up an image and saying “go!”. Just to name a few examples: there’s ensuring all instance sizes are supported, adding in-cloud mirrors across regions and zones to ensure faster/cheaper updates, natively packaging API tools and hosting them in the archives, updating images with SRUs to avoid costly time spent updating at first boot, making daily development images available, and ensuring Juju works within the cloud to allow for service orchestration and migration to other supported public clouds.
Speaking of Juju, we’ve also invested years (not months…YEARS) into our service orchestration project, and I can promise you that no one else, right now, has anything that can come close to what it can do. Sure, there are plenty of people talking about service orchestration…writing about service orchestration…and some might even have a prototype or beta of a service orchestration tool, but no one comes close to what we have in Juju…no one has the community engagement behind their toolset that’s growing every day. I’m not saying Juju is perfect by any means, but it’s the best you’re going to find if you are really serious about doing service orchestration in the cloud or even on the metal.
Over the next 12 months, you will see Ubuntu continue to push the limits of what users can expect from their operating system when it comes to scale-out computing. You have already seen what the power of the Ubuntu community can do with a phone and tablet….just watch what we do for the cloud.
Wow…I just realized how long it’s been since I did a blog post, so apologies for that first off. FWIW, it’s not that I haven’t had any good things to say or write about, it’s just that I haven’t made the time to sit down and type them out…I need a blog thought transfer device or something. Anyway, with all the talk about Ubuntu doing a rolling release, I’ve been thinking about how that would affect Ubuntu Server releases, and more importantly…could Ubuntu Server roll as well? In answering this question, I think it comes down to two main points of consideration (beyond what the client flavors would already have to consider).
We have a lot of anecdotal data and some survey evidence that most Ubuntu Server users mainly deploy the LTS. I doubt this surprises people, given the support life for an LTS Ubuntu Server release is 5 years, versus only 18 months for a non-LTS Ubuntu Server release. Your average sysadmin is extremely risk averse (for good reason), and thus wants to minimize any risk of unwanted change in his/her infrastructure. In fact, most production deployments don’t even pull packages from the main archives; instead they mirror them internally to allow for control of exactly what and when updates and fixes roll out to internal client and/or server machines. Using a server operating system that requires you to upgrade every 18 months to continue getting fixes and security updates just doesn’t work in environments where the systems are expected to support 100s to 1000s of users for multiple years, often without significant downtime. With that said, I think there are valid uses of non-LTS releases of Ubuntu Server, with most falling into two main categories: Pre-Production Test/Dev or Start-Ups, with the reasons actually being the same. The non-LTS version is perfect for those looking to roll out products or solutions intended to be production ready in the future. These releases provide users a mechanism to continually test out what their product/solution will eventually look like in the LTS, as the versions of the software they depend upon are updated along the way. That is, they’re not stuck having to develop against the old LTS and hope things don’t change too much in two years, or use some “feeder” OS, where there’s no guarantee the forked and backported enterprise version will behave the same or contain the same versions of the software they depend on. In both of these scenarios, the non-LTS is used because it’s fluid, and going to a rolling release only makes this easier…and a little better, I dare say.
For one, if the release is rolling, there’s no huge release-to-release jump during your test/dev cycle; you just continue to accept updates when ready. In my opinion, this is actually easier in terms of rolling back as well, in that you have fewer parts moving all at once to roll back if needed. The second thing is that the process for getting a fix from upstream or a new feature is much less involved, because there’s no SRU patch backporting, just the new release with the new stuff. Now admittedly, this also means the possibility of new bugs and/or regressions; however, given these versions (or ones built subsequently) are destined to be in the next LTS anyway, the faster the bugs are found and sorted, the better for the user in the long term. If your solution can’t handle the churn, you either don’t upgrade and accept the security risk, or you smoke test your solution with the new package versions in a duplicate environment. In either case, you’re not running in production, so in theory…a bug or regression shouldn’t be the end of the world. It’s also worth calling out that from a quality and support perspective, a rolling Ubuntu Server means Ubuntu developers and Canonical engineering staff who normally spend a lot of time doing SRUs on non-LTS Ubuntu Server releases can now focus efforts on the Ubuntu Server LTS release…where we have a majority of users and deployments.
In terms of Juju, a move to a rolling release tremendously simplifies some things and mildly complicates others. From the point of view of a charm author, this makes life much easier. Instead of writing a charm to use a package in one release, then continuously duplicating and updating it to work with subsequent releases that have newer packages, you only maintain two charms…a maximum of three if you want to include options for running code from upstream. The idea is that every charm in the collection would default to using packages from the latest Ubuntu Server LTS, with options to use the packages in the rolling release, and possibly an extra option to pull and deploy direct from upstream. We already do some of this now, but it varies from charm to charm…a rolling server policy would demand we make this mandatory for all accepted charms. The only place where the rules would be slightly different is in the Ubuntu Cloud Archive, where the packages don’t roll; instead, new archive pockets are created for each OpenStack release. From a user’s perspective, a rolling release is good, yet is also complicated unless we help…and we will. In terms of the good, users will know every charmed service works and only have to decide between LTS and rolling as the deployment OS, whereas now, they have to choose a release, then hope the charm has been updated to support that release. The reduction in charm-to-release complexity also allows us to do better testing of charms because we don’t have to test every charm against oneiric, precise, raring, “s”, etc., just precise and the rolling release…giving us more time to improve and deepen our test suites.
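The proposed policy boils down to one small decision per charm. Here is a hypothetical sketch of it; the option name `install-from` and the source labels are made up for illustration, not actual charm configuration keys.

```python
# Hypothetical sketch of the per-charm install policy described above:
# default to the latest LTS packages, with opt-in options for the rolling
# release or deployment straight from upstream. Option names are
# illustrative, not real charm configuration keys.

def pick_source(config):
    """Map a charm's configuration to exactly one package source."""
    if config.get("install-from") == "upstream":
        return "upstream-vcs"            # pull and deploy direct from upstream
    if config.get("install-from") == "rolling":
        return "ubuntu-rolling-archive"  # packages from the rolling release
    return "ubuntu-lts-archive"          # the default: latest Ubuntu Server LTS

print(pick_source({}))                           # ubuntu-lts-archive
print(pick_source({"install-from": "rolling"}))  # ubuntu-rolling-archive
```

Making this mandatory for accepted charms is what collapses the charm-to-release test matrix to just “LTS plus rolling”.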
With all that said, a move to a rolling Ubuntu Server release for non-LTS also adds the danger of inconsistent package versions for a single service in a deployment. For example, you could deploy a solution with 5 instances of wordpress 3.5.1 running, we update the archive to wordpress 3.6, then you decide to add 3 more units, thus giving you a wordpress service of mixed versions…this is bad. So how do we solve this? It’s actually not that hard. First, we would need to ensure that Juju never automatically adds units to an existing service if there’s a mismatch in the version of binaries between the currently deployed instances and the new ones about to be deployed. If Juju detected the binary inconsistency, it would need to return an error, optionally asking the user if he/she wanted it to upgrade the currently running instances to match the new binary versions. We could also add some sort of --I-know-what-I-am-doing option to give the freedom to those users who don’t care about having version mismatches. Secondly, we should ensure an existing deployment can always grow itself without requiring a service upgrade. My current thinking around this is that we’d create a package caching charm that can be deployed against any existing Juju deployment. The idea is much like squid-deb-proxy (except that the cache never expires or renews), where the caching instance acts as the archive mirror for the other instances in the deployment, providing the same cached packages deployed in that given solution. The package cache should be run in a separate instance with persistent storage, so that even if the service completely goes down, it can be restored with the same packages in the cache.
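The first safeguard is simple enough to sketch. This is an illustrative model of the guard described above, not Juju’s actual implementation; the names and the `force` flag are hypothetical.

```python
# Sketch of the version-consistency guard described above (illustrative,
# not Juju's implementation): refuse to grow a service when new units would
# run different package versions than the deployed ones, unless overridden.

class VersionMismatch(Exception):
    pass

def add_units(service, count, archive_version, force=False):
    deployed = {u["version"] for u in service["units"]}
    if deployed and archive_version not in deployed and not force:
        raise VersionMismatch(
            f"archive has {archive_version}, deployed units run {deployed}; "
            "upgrade the service first, or override explicitly")
    service["units"] += [{"version": archive_version} for _ in range(count)]

wordpress = {"units": [{"version": "3.5.1"} for _ in range(5)]}
try:
    add_units(wordpress, 3, "3.6")          # archive moved on: refused
except VersionMismatch as err:
    print("refused:", err)
add_units(wordpress, 3, "3.6", force=True)  # the --I-know-what-I-am-doing path
print(len(wordpress["units"]))              # 8 units, mixed versions, on purpose
```

With the guard in place, the mixed-version state can only ever be reached deliberately, which is the whole point.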
I honestly think we can and should consider it, but I’d also like to hear the concerns of folks who think we shouldn’t.
The amount of uptake seen with Ubuntu Server over the past year has been extremely rewarding and simply amazing. Infrastructure as a Service (IaaS), a.k.a. Public Cloud, providers are popping up left and right, all wanting to provide Ubuntu Server…all helping to further cement Ubuntu Server’s position as the OS for the cloud.
With that said, I’ve started to become concerned about the way in which some of these IaaS providers distribute Ubuntu. Ubuntu developers create, publish, and regularly update images on Amazon Web Services and Microsoft Azure. Canonical hosts and maintains internal archive mirrors in these clouds to provide a low-latency, low-cost update mechanism to users. Finally, Canonical engineers purposely designed a pluggable cloud provider API approach into Ubuntu’s service orchestration application, Juju, to lower the operational barriers that often place limitations on cross-cloud workload and service migrations. We do all this to help ensure cross-platform consistency for Ubuntu Server users, i.e. workloads and applications run on Ubuntu Server behave in the same manner on bare metal machines and across IaaS providers.
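To illustrate what a pluggable provider approach buys you, here is a minimal sketch of the idea. This is not Juju’s actual provider API; the class and method names are invented for illustration.

```python
# Minimal sketch of the pluggable-provider idea (illustrative; not Juju's
# actual API): deployment logic codes against one interface, and each cloud
# supplies its own implementation, so workloads can move between providers
# without changes to the orchestration code.

from abc import ABC, abstractmethod

class Provider(ABC):
    @abstractmethod
    def start_instance(self, image: str) -> str: ...

class EC2Provider(Provider):
    def start_instance(self, image):
        return f"ec2-instance:{image}"   # a real provider would call the EC2 API

class AzureProvider(Provider):
    def start_instance(self, image):
        return f"azure-vm:{image}"       # a real provider would call the Azure API

def deploy(provider: Provider, image="ubuntu-server-12.04"):
    return provider.start_instance(image)  # identical code path on every cloud

print(deploy(EC2Provider()))   # ec2-instance:ubuntu-server-12.04
print(deploy(AzureProvider())) # azure-vm:ubuntu-server-12.04
```

The provider is the only swappable piece; everything above it stays the same, which is what lowers the barrier to cross-cloud migration.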
Some IaaS providers and users have decided to produce and host their own Ubuntu Server images without the involvement of the Ubuntu Project or Canonical. I won’t go into the legal aspects of this, because I’m no lawyer. However, I believe there is a real risk to users when these images are modified in some way, but still presented as “official” Ubuntu Server images. Whether the changes are minor, like redirecting fixes and security updates to internal unofficial mirrors, or major, like making changes to the OS and/or applications provided in the images themselves, labeling the images as “official” Ubuntu Server is a misrepresentation of the project and the product. There is a real and legitimate risk of users losing out on the cross-platform assurance that the Ubuntu project and Canonical work so hard to provide, due to the images having untested code or simply being out of sync on fixes and updates. Furthermore, there’s no guarantee that bug fixes made to these modified images will ever make it into the official distro, thus creating a further fork between expected behavior across both bare metal and cloud platforms. All of this has the potential to lead to a poor user experience that’s very damaging to the reputation of Ubuntu the project and product, not to mention Canonical as its sponsor.
We, within the Ubuntu Server team, work extremely hard to ensure our community can depend on having the same user experience and application execution results across all supported platforms, bare metal or cloud. So…if you are an IaaS provider, and you elect to produce and distribute modified Ubuntu Server images, please…please ensure your users are aware of this by labeling them as customized derivatives. Let them know that by using these modified images they potentially run the risk of delayed bug fixes and security updates…and that differences in OS and application behavior from your changes can lead to higher levels of complexity if/when they have a need to move workloads and services to/from other official Ubuntu Server deployments.
Thanks…we now return you to your regularly scheduled program.
With the release of Ubuntu Server 12.04 LTS quickly approaching, the Ubuntu Server Team has been working extremely hard on ensuring OpenStack Essex will be of high quality and tightly integrated into Ubuntu Cloud. As with prior Long Term Support releases, Canonical commits to maintaining Ubuntu Server 12.04 LTS for five years, which means users receive five years of maintenance for the OpenStack Essex packages we provide in main. With that said, we recognize that OpenStack is still a relatively young project moving at a tremendous rate of innovation right now, with features and fixes already planned for Folsom that some users require for their production deployment. In the past, these users would have to upgrade off the LTS in order to get maintenance for the OpenStack release they need on Ubuntu Server…thus foregoing the five year maintenance they want and need for their production deployment. We wholeheartedly believe there are situations where moving to the next release of Ubuntu (12.10, 13.04, etc) for newer OpenStack releases works just fine, especially for test/dev deployments. However, we also know there will be many situations where users cannot afford the risk and/or the cost of upgrading their entire cloud infrastructure just to get the benefits of a newer OpenStack release, and we need to have a solution that fits their needs. After thinking about what users want and where most people expect OpenStack to go in terms of continued innovation and stability, we have decided to provide Ubuntu users with two options for maintenance and support in the 12.04 LTS.
The first option is that users can stay with the shipped version of OpenStack (Essex) and remain with it for the full life of the LTS. As per the Ubuntu LTS policy, we commit to maintaining and supporting the Essex release for 5 years. The point releases will also ship the Essex version of OpenStack, along with any bug fixes or security updates made available since its release.
The second option involves Canonical’s Ubuntu Cloud archive, which we are officially announcing today. Users can elect to enable this archive, and install newer releases of OpenStack (and the dependencies) as they become available up through the next Ubuntu LTS release (presumably 14.04). Bug processing and patch contributions will follow standard Ubuntu practice and policy where applicable. Canonical commits to maintaining and supporting new OpenStack releases for Ubuntu Server 12.04 LTS in our Ubuntu Cloud archive for at least 18 months after they release. Canonical will stop introducing new releases of OpenStack for Ubuntu Server 12.04 LTS into the Ubuntu Cloud archive with the version shipped in the next Ubuntu Server LTS release (presumably 14.04). We will maintain and support this last updated release of OpenStack in the Ubuntu Cloud archive for 3 years, i.e. until the end of the Ubuntu 12.04 LTS lifecycle.
In order to allow for relatively easy upgrades, and still adhere to Ubuntu processes and policy, we have elected to have archive.canonical.com be the home of the Ubuntu Cloud archive. We will enable update paths for each OpenStack release.
Ubuntu’s release policy states that once an Ubuntu release has been published, updates must follow a special procedure called a stable release update, or SRU, and are delivered via the -updates archive. These updates are restricted to a specific set of characteristics, chiefly fixes for high-impact bugs, security vulnerabilities, and serious regressions.
Exceptions to the SRU policy are possible. However, for this to occur, the Ubuntu Technical Board must approve the exception, which must meet the board’s published guidelines.
Once approved by the Tech Board, the exception must have a documented update policy, e.g. http://wiki.ubuntu.com/LandscapeUpdates. Based on these guidelines and the core functionality OpenStack serves in Ubuntu Cloud, the Ubuntu Server team did not feel it was in the best interest of their users, nor Ubuntu in general, to pursue an SRU exception.
The Ubuntu Backports process (which excludes the kernel) provides us a mechanism for releasing package updates for stable releases that provide new features or functionality. Changes were recently made to `apt` in Ubuntu 11.10, whereby it now only installs packages from Backports when they are explicitly requested. Prior to 11.10, `apt` would install everything from Backports once it was enabled, which led to packages being unintentionally upgraded to newer versions. The primary drawbacks of using the Backports archive are that the Ubuntu Security team does not provide updates for the archive, it’s a bit of a hassle to enable per-package updates, and Canonical doesn’t traditionally offer support services for the packages hosted there. Furthermore, with each new release of OpenStack, there are other applications that OpenStack depends on that also must be at certain levels. By having more than one version of OpenStack in the same Backports archive, we run a huge risk of backward compatibility issues with these dependencies.
In order for us to ensure users have a safe and reliable upgrade path, we will establish a QA policy where all new versions and updated dependencies are required to pass a specific set of regression tests with a 100% success rate.
Only upon successfully exiting QA will packages be pushed into the Ubuntu Cloud archive.
Good question. The cycle could repeat itself, however at this point Canonical is not making such a commitment. If the rate of innovation and growth of the OpenStack project matures to a point where users become less likely to need the next release for its improved stability and/or quality, and instead just want it for a new feature, then we would likely return to our traditional LTS maintenance and support model.
Okay, so now that I got your attention….let me explain.
Over this past year and a half (maybe a little longer), I’ve seen Ubuntu Server explode in number and types of deployments, specifically around areas involving cloud computing, but also in situations involving big data and ARM server deployments. This has all occurred at a time when people and organizations are having to do more with less…less lab space…less power…less people, which of course all leads to the real desire of operating at less financial cost. I’ve come to the conclusion that my saying we should focus Ubuntu Server on being the best OS for cloud computing at the 11.10 UDS was aiming too low. It’s awesome that we’ve essentially done this with our OpenStack integration efforts for Ubuntu Cloud, but we can do more…we can do better. I now believe that for 12.04 LTS and beyond, what Ubuntu Server should actually drive towards is being the best OS for scale-out computing.
Scale-out computing is the next evolutionary step in enterprise server computing. It used to be that if you needed an enterprise worthy server you had to buy a machine with a bunch of memory, high-end CPU configuration, and a lot of fast storage. You also needed to plan ahead to ensure what you purchased had enough open CPU and memory slots, as well as drive bays, to make sure you could upgrade when demand required it. When the capacity limit (cpu, memory, and/or storage) of this server was hit, you had to replace it with a newer, often more expensive one, again planning for upgrades down the road. Finally, to ensure high availability, you had to have one or two more of these servers with the same configuration. Companies like Google, Amazon, and Facebook then came along and recognized that they could use low-cost, commodity hardware to build “pizza box” servers to do the same job, instead of relying on expensive, mainframe-like servers that needed costly redundancy built into every deployment. These organizations realized that they could rely on a lot of cheap, easy-to-find (and replace) servers to effectively do the job a few scaled-up, high-end (and high-cost) servers could tackle. More work could be accomplished, with a reduced risk of failure, by exploiting the advantages a scale-out solution provided. If a machine were to die in a comparable scale-up configuration, it would be very costly in both time and money to repair or replace it. The scale-out approach allowed them to use only what they needed and quickly/easily replace systems when they went down.
Fast forward to today, and we have an explosion of service and infrastructure applications, like Hadoop, Ceph, and OpenStack, architected and built for scale-out deployments. We even have the Open Compute Project focused on designing servers, racks, and even datacenters to specifically meet the needs of scale-out computing. It’s clear that scale-out computing is overtaking scale-up as the preferred approach to most of today’s computational challenges.
It’s not all rainbows and unicorns though…scale-out comes with its own inherent problems. There’s a great paper published by IBM Research called Scale-up x Scale-out: A Case Study using Nutch/Lucene, where the researchers set out to measure and compare the performance of a scale-up versus scale-out approach to running a combined Nutch/Lucene workload. Nutch/Lucene is an open-source framework written in Java for implementing search applications, consisting of three major components: crawling, indexing, and query. Their results indicated that “scale-out solutions have an indisputable performance and price/performance advantage over scale-up”, and that “even within a scale-up system, it was more effective to adopt a “scale-out-in-a-box” approach than a pure scale-up to utilize its processors efficiently”, i.e. use virtualization technologies like KVM. However, they also go on to conclude that
“scale-out systems are still in a significant disadvantage with respect to scale-up when it comes to systems management. Using the traditional concept of management cost being proportional to the number of images, it is clear that a scale-out solution will have a higher management cost than a scale-up one.”
These disadvantages are precisely what I see Ubuntu Server attempting to account for over the next few years. I believe that in Ubuntu Server 12.04LTS, we have already started to address these issues in several specific ways.
One obvious issue with scale-out computing is the need for space to store your servers and provide enough power to run/cool them. We haven’t figured out how to shrink the size of your server through code, so we can’t help with the space constraints. However, we have started to develop solutions that can help administrators use less power to run their deployments. For example, we created PowerNap, which is a configurable daemon that can bring a running server to a lower power state according to a set of configuration preferences and triggers.
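To give a feel for the mechanism, here is an illustrative sketch of the PowerNap idea. This is not PowerNap’s actual implementation or configuration format; the trigger names and grace period are hypothetical.

```python
# Illustrative sketch of the PowerNap idea (not its actual implementation):
# watch a set of activity triggers and drop the server to a lower power
# state once every trigger has been quiet for a configured grace period.

def powernap_step(triggers, quiet_seconds, grace_period=300):
    """Return the target power state for one monitoring pass.

    triggers      -- names of monitored activity sources (e.g. ssh, load)
    quiet_seconds -- seconds since each trigger last saw activity
    """
    if all(quiet_seconds[t] >= grace_period for t in triggers):
        return "powersave"   # nothing happening anywhere: take a nap
    return "performance"     # at least one trigger is active: stay awake

triggers = ["ssh", "system-load"]
print(powernap_step(triggers, {"ssh": 900, "system-load": 450}))  # powersave
print(powernap_step(triggers, {"ssh": 900, "system-load": 10}))   # performance
```

Across hundreds of mostly idle nodes in a scale-out deployment, even a coarse policy like this can translate into meaningful power savings.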
As a company, Canonical also began investing in supporting processor technologies that focused on delivering a high rate of operations at low power consumption. ARM has a long-standing history of providing processors that use very little power. The potential for server applications meant you could drive server processor density up and still keep power consumption relatively low. With this greater density, server manufacturers started to see opportunities for building very high speed interconnects that allow these processors to share data and cooperate quickly and easily. ARM server technology companies such as Calxeda can now build computing grids that don’t require water cooling or an in-house backup generator running when you turn them on. With the Cortex-A9 and Cortex-A15 processors in particular, the performance differential between ARM processors and x86 is starting to shrink significantly. We are getting closer to having full 64-bit support in the coming ARMv8 processors, which will still retain the low-power and low-cost heritage of the ARM processor. Enterprise server manufacturers are already planning to start putting ARM processors into very low-cost, very dense, and very robust systems to provide the kind of functionality, interconnectivity, and compute power that used to only be possible in mainframes. Ubuntu Server 12.04 LTS will support ARM, specifically the hard-float compilation configuration (armhf). With our pre-releases already receiving such good performance reviews, we are excited about the possibilities. If you want to know more about what we’ve done with ARM for Ubuntu Server, I recommend you start with a great FAQ posted on our wiki.
Traditional license and subscription support models are built for scale-up solutions, not scale-out. These offerings either price by number of users or number of cores per machine, which are within reason when deploying onto a small number of machines, i.e. under 100…maybe a bit higher depending on the size of the organization. The base price gets you access to security updates and bug fixes, and you have to pay more to get more, i.e. someone on the phone, email support, custom fixes, etc. This is still acceptable to most users in a scale-up model.
However, when the solution is scale-out, i.e. 1000s or more, this pricing gets way out of control. Many of the license and subscription vendors have recently wised up to this, and offer cluster-based pricing, which isn’t necessarily cheap, but certainly much less costly than the per socket/CPU/user approach. The idea is that you pay for the master or head node, and then can add as many slave nodes as you want for free.
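A quick back-of-the-envelope comparison shows why per-machine pricing breaks down at scale. All prices here are hypothetical, chosen purely to illustrate the shape of the two models.

```python
# Hypothetical comparison of the two pricing models above. The dollar
# figures are made up for illustration; only the scaling behavior matters.

def per_machine_cost(nodes, price_per_node=1000):
    """Traditional model: every machine carries its own subscription."""
    return nodes * price_per_node

def cluster_cost(nodes, head_node_price=25000):
    """Cluster model: pay for the head node, slave nodes are free."""
    return head_node_price  # flat, whatever the node count

for nodes in (10, 100, 1000):
    print(nodes, per_machine_cost(nodes), cluster_cost(nodes))
# at 10 nodes the per-machine model is cheaper; at 1000 nodes it costs
# 40x the flat cluster price under these (hypothetical) numbers
```

The crossover point depends entirely on the vendor’s numbers, but the per-machine curve is linear in node count while the cluster price is flat, which is the whole argument.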
Ubuntu Server provides security updates and maintenance for the life of the release…for free. That means for an LTS release of Ubuntu Server, users get five years of free maintenance. If you need someone to call or custom solutions, you can pay Canonical for that…but if you don’t…you pay nothing. It doesn’t matter if you have a few machines or over 1,000; security updates and maintenance for the set of supported packages shipped in Ubuntu are free.
Deploying interconnected services across a scale-out deployment is a PITA. After procuring the necessary hardware and finding lab space, you have to physically set up the machines, install the OS and required applications, and then configure and connect the various applications on each machine to provide the desired services. Once you’ve deployed the entire solution, upgrading or replacing the service applications, modifying the connections between them, scaling out to account for higher load, and/or writing custom scripts for re-deployment elsewhere requires even more time…and pain.
Juju is our answer to this problem. It focuses on managing the services you need to deliver a single solution, above simply configuring the machines or cloud instances needed to run them. It was specifically designed, and built from the ground up, for service orchestration. Through the use of charms, Juju provides you with shareable, re-usable, and repeatable expressions of DevOps best practices. You can use them unmodified, or easily change and connect them to fit your needs. Deploying a charm is similar to installing a package on Ubuntu: ask for it and it’s there, remove it and it’s completely gone. We’ve dramatically improved Juju for Ubuntu Server 12.04 LTS, from integrating our charm collection into the client (removing the need for bzr branches) to rolling out a load of new charms for all the services you need…and probably some you didn’t know you wanted. As my good friend Jorge Castro says, the Juju Charm Store Will Change the Way You Use Ubuntu Server.
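To give a feel for the charm workflow, here’s a minimal session sketch using the stock wordpress and mysql charms (it assumes a configured Juju environment with cloud credentials in place; any charms from the store follow the same pattern):

```shell
# Stand up the environment (provisions the bootstrap node/instance)
juju bootstrap

# Ask for services by charm name -- like installing a package
juju deploy mysql
juju deploy wordpress

# Wire the services together; the charms negotiate the connection details
juju add-relation wordpress mysql

# Open the service to the outside world, then watch it come up
juju expose wordpress
juju status

# Scaling out is one command per extra unit
juju add-unit wordpress
```

Removing a service (`juju destroy-service wordpress`) takes it away just as cleanly, which is what makes charms feel like packages for services rather than machines.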
In terms of deployment, we recognized this hole in our offering last cycle and rolled out Orchestra as a first step, to see what the uptake would be. Orchestra wasn’t an actual tool or product, but a meta-package pointing to existing technologies, like cobbler, already in our archive. We simply ensured the tools we recommended worked, so that in 11.10 you could deploy Ubuntu Server across a cluster of machines easily.
After 11.10 released, we realized we could extend the idea from simple, multi-node OS install and deployment to a more complex offering of multi-node service install and deployment. This effort would require us to do more than just integrate existing projects, so we decided to create our own project called MAAS (Metal as a Service), which would be tied into Juju, our service orchestration tool.
Ubuntu 12.04 LTS will include Canonical’s MAAS solution, making it trivial to deploy services such as OpenStack, Hadoop, and Cloud Foundry on your servers. Nodes can be allocated directly to a managed service, or simply have Ubuntu installed for manual configuration and setup. MAAS lets you treat farms of servers as a malleable resource for allocation to specific problems, and re-allocation on a dynamic basis. Using a pretty slick user interface, administrators can connect, commission, and deploy physical servers in record time, re-allocate nodes between services dynamically, keep them all up to date, and in due course, retire them from use.
There’s a lot more we need to do. What if the MAAS commissioning process included hardware configuration, for example RAID setup and firmware updates? What if you could deploy and orchestrate your services by mouse click or touch…never touching a keyboard? What if your services were allocated to machines based on power footprint? What if your bare metal deployment could also be aware of the Canonical hardware certification database for systems and components, allowing you to quickly identify systems that are fully certified or might have potentially problematic components? What if your services auto-scaled based on load without you having to be involved? What if you could have a true hybrid cloud solution, bursting up to a public cloud(s) of your choosing without ever having to rewrite or rearchitect your services? These types of questions are just some of the challenges we look to take on over the next few releases, and if any of it interests you…I encourage you to please join us.
The title says it all. If I can pick it up in a few days…anybody can. From local deployment via LXC, to an AWS-API-compatible cloud, to bare metal via Ubuntu Orchestra…Juju makes deploying/controlling/scaling services insanely simple. Over my holiday break I felt the need to create a video homage to show just how easy it is…and so I present the following…I chose to deploy ThinkUp (because it’s awesome) with a fitting music track from the Commodores. Enjoy!
If I find time, I’ll create another with Hadoop…music and theme TBD.
Over the last few days, I felt the compelling need to explain why I think Ubuntu is the best operating system for the cloud. In my mind, it comes down to three key differentiators that I think benefit both users and the overall advancement of the cloud.
Cloud computing and the technologies surrounding it are advancing at an absolutely incredible pace. Consider how fast OpenStack has matured in the last year, the recent explosion of Hadoop solutions, and the entire movement around Open Compute. Legacy “enterprise” Linux solutions simply cannot keep up, given their existing release processes. Users of the cloud and other scale-out technologies can’t afford to wait years for the next supported release, especially when that release is destined to be out of date the day it ships, due to the slow-moving technology transition model used by the distribution provider: open-source project foo releases at time A, then gets into the “community” version of the distribution at time B, six or more months later, then *might* make it into the enterprise version at a much later time (years) C.
If you ask these legacy distributions why they move so slowly, they’ll undoubtedly say it’s because they are aligning with the hardware release cycles of most server OEMs, which is absolutely true. This is why I’m so excited by the Open Compute Project and its potential to reduce what Andreas “Andy” Bechtolsheim recently called gratuitous differentiation in a keynote discussion at this year’s Open Compute Summit in NYC. In short, most OEMs have traditionally introduced features that are more about customer lock-in than really answering their customers’ needs, e.g. releasing a new blade that requires a new bladecenter, that won’t work with the older model nor in another OEM’s bladecenter…or even worse, special server racks to match their servers that won’t work with anyone else’s…insane! The only benefit I’ve seen from gratuitous server technology differentiation is that it’s probably a big reason why so many businesses have jumped to the cloud…where they don’t have to worry about this stuff anymore. Hopefully, we can avoid having different APIs and custom Linux distributions from each cloud service provider, as I feel these are just more attempts at customer lock-in, and don’t really provide that much value to the users themselves.
Legacy Linux distributions also like to tout the ABI compatibility that they enforce for the benefit of their customers and ISV partners. The logic is that by guaranteeing ABI at the kernel and plumbing layer throughout a given release and its updates, ISVs and their customers are assured that their applications (assuming they don’t change) will work for the life of the release. Besides again fitting the slow-paced legacy OEM server release model, this makes perfect sense in a legacy server software world too. An ISV can build a release once, and then issue fixes thereafter, until the next major release in a year or so. As we move toward a faster-paced, continuous-integration, scale-out computing world, however, ABI compatibility becomes more of a hindrance than an advantage for users. The rate of innovation is now so fast that even packaging certain webscale applications is frowned upon by the upstreams that provide them, because they don’t want their users’ experience limited to a distribution’s release cycle. It also becomes difficult, sometimes impossible, for most of the legacy Linux distributions to introduce new hardware architectures, i.e. ARM server support, post-release. Server OEMs are forced to either go through the pain of backporting huge amounts of code into a forked kernel (that receives little outside testing), slip their own hardware roadmaps to match the distribution release cycle, or try to convince (usually with money) the legacy Linux distributor to issue some “special” release to accommodate them.
Ubuntu is free, and Canonical has made the promise that it always will be. By free, we mean no license fees or paid subscriptions to receive updates. Around 10 years ago, when the first legacy Linux distributions were coming about, the movement to a subscription-based model was seen as a revolutionary change in the software business. Instead of charging licenses on a per-user basis, which was the accepted model for operating systems and software as a whole at the time (in addition to support contracts), these companies had the ingenious approach of giving away the software and creating an updates subscription model. Realizing that software requires updates, and that most (but not all) users will want them, they created a system that gave them dependable, consistent revenue per installation, while giving customers the freedom to have as many users on the system as they needed, as well as machines that simply sit and do their job, never needing an update (think mail or DNS server). Later on, they partnered with server OEMs and brilliantly started to differentiate these subscription costs based on the architectures and CPU cores of the hardware…learning tricks the OEMs had played with their own proprietary operating systems of the day.
The subscription + support model has done well…extremely well over the past decade, but in the cloud…in scale-out computing, the model begins to hurt…extremely in some cases. One of the main benefits of cloud computing is the ability to scale on demand. A given deployment can have a guest instance count in the low 10s for 6 months, but then need to scale out to the 100s or 1000s for another 4, returning to original levels after peak demand has subsided, e.g. demands on online retail infrastructure increase dramatically during the holidays and then subside soon after. For a subscription-based model, this means customers must budget for an increase in fees to account for the scaling, and if they underestimate, their own profits are impacted because of it. Furthermore, making someone pay for fixes and security updates just seems wrong to me…if Google or Mozilla started charging people for fixes and security updates for their web browsers, people would lose their minds. Finally, because applications (especially scale-out/webscale ones) are innovating so fast now…adopting new development methodologies like continuous integration…it’s unthinkable that someone would deploy software and never want the updates. Charging someone for fixes and updates is now as archaic as charging them by the number of users.
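The budgeting problem is easy to see with some toy arithmetic. The instance counts and the $100/instance/year fee below are made up for illustration; the point is only the gap between licensing for peak and paying for actual usage.

```python
# Why per-instance subscriptions hurt bursty, scale-out deployments.
# All numbers are hypothetical.

FEE = 100  # invented annual subscription fee per instance

# (instance_count, months) over one year:
# a small steady-state fleet for 8 months, a 1000-instance burst for 4.
profile = [(20, 8), (1000, 4)]

# Annual subscriptions are sold per instance, so you end up licensing
# for the peak count all year.
peak_cost = max(count for count, _ in profile) * FEE

# A usage-based (or free-updates) model only charges for what actually runs.
prorated_cost = sum(count * FEE * months / 12 for count, months in profile)

print(peak_cost, round(prorated_cost, 2))
```

Even with these toy numbers, licensing for peak costs roughly three times what the deployment actually uses; underestimating the burst just moves the pain to a surprise invoice.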
The service model is the next evolutionary step, away from the subscription model. It recognizes that a Linux distributor’s real value to the customer is the expertise they have from producing the distribution, having the upstream relationships, and knowing the integrated technologies inside and out. Thus, the business model is built around the support and services they are able to provide because of their unique position, not the bug fixes and security updates that users should expect to get for the same cost as the original software…free.
To the average consumer, I suspect the Ubuntu release cadence is not much more than a nice thing to have. There’s no need to speculate on when the next release is coming, or what it will contain, because we plan transparently. While we always deliver on a 6-month cadence, users aren’t forced to upgrade that often, as we support each release for 18 months…and up to 5 years for the LTS that comes every 2 years. And yet, despite having such a predictable release cycle, we still manage to generate growing excitement for each one (personally, that’s just amazing to me).
Now if you’re someone deploying a private cloud, a solution into a cloud, or even releasing hardware focused at the cloud, the cadence becomes less of a “nice thing” and more of a necessity. Whether you’re planning a hardware or software release, being able to depend on an operating system release schedule that won’t slip is a huge benefit and relief. There are enough internal moving parts to any significant software or hardware release project; add the rapid pace of cloud innovation, and no one wants to worry that their entire business plan can be jeopardized by the OS vendor slipping their release schedule…to accommodate a partner, possibly even your direct competitor.
A dependable, transparent release process not only provides peace of mind, it allows for the best possible collaboration. Transparency allows users, partners, and upstreams alike to observe and influence the direction of each Ubuntu release. There’s no waiting for the first pre-release ISO to see if your feature made it in, or if the next ISO will boot on your new hardware, because you can track every bug and feature work item. As part of our transparent and dependable process, we produce pre-release Ubuntu ISOs and cloud images daily. While each daily isn’t guaranteed to be installable, bootable, or tested to the level of an alpha or beta release, it’s usually good enough to give users and partners something to sniff out and provide feedback on…giving them confidence that their cloud solution depending on our OS won’t be in jeopardy at release. You won’t find this with legacy Linux distributions…not even their closest business partners get this level of access.
As I’ve said in the past, Canonical’s investment in Ubuntu Server is focused on cloud computing. So to be clear: while we have a tremendous community looking after the quality of support for traditional server workloads, and a solid inheritance of dependability and stability from Debian, I would be lying if I said Ubuntu is the best choice for every type of server deployment. Hell, I challenge anyone to name one operating system that really is. All I’m saying is that Ubuntu is the best operating system for cloud computing…and Canonical will continue to focus our innovation to ensure it stays that way.
Then check out the new status.ubuntu.com website. In particular, you can see not only the usual work item status of the team and of individuals, but also progress towards the five areas I’ve called out as important for this cycle:
So I just read Lennart Poettering’s “fair and balanced” review of sysvinit, upstart, and systemd….wow. Looking at his comparison chart, we in Ubuntu must be idiots to not switch over to systemd immediately…especially since he clearly points out all the major distributions have done so (or plan to) already. Once again, the evil Mark Shuttleworth must be dictating that Ubuntu must remain on upstart, oppressively pushing down all those who challenge his rule….whatever people. So here’s the real reason why I think we should remain on upstart in 11.10, it’s because (as Mark mentioned today) we put users first. Do I need to remind anyone of the pain we went through in Karmic (Ubuntu 9.10) when we finally made the wholesale jump to upstart? Sure, we achieved great boot performance gains, but it was painful, especially for Ubuntu Server, as it was largely neglected during that effort (and I’ll take the blame for that). We spent the next release, Lucid, cleaning up behind ourselves….frantically working to get the next LTS in a respectable shape…and still, Ubuntu Server was neglected (again, blame me).
So here we are again, one release before an LTS…an LTS that is not only going to showcase the quality of the Desktop, but is going to be extremely important for Ubuntu Server, and people are asking us to switch to systemd? Really?? We just got done improving upstart, making upstart play nicer with Server, rolled out a damn nice user guide, and even added some slick features (like job and event visualization)…and we’re supposed to throw all that out and switch to systemd now? The situation reminds me of a quote from one of the funniest (and probably worst) presidents in US history:
“There’s an old saying in Tennessee — I know it’s in Texas, probably in Tennessee — that says, fool me once, shame on — shame on you. Fool me — you can’t get fooled again.” -George Bush
But seriously, all joking aside, I don’t want to go through a rushed change again, which is why I support staying on upstart for both 11.10 and 12.04 LTS, and then taking a serious look at the merits and drawbacks of moving to systemd going into the 12.10 cycle…basing the decision on what we feel is important to Ubuntu and its users, not Lennart. By then, systemd will have had another year to mature, we won’t have an impending LTS release on our backs, and if Debian truly is to switch to systemd, then a year’s wait while that work goes on should only improve the chances of Ubuntu adopting it.
Lennart, if you by chance read this, can you please stop the campaign and badgering against us…it ultimately does you no good. We aren’t pushing back because we don’t like you, or Fedora, or because Mark is forcing us to stay with upstart…it’s because we put users first. While I agree upstart isn’t perfect, and certainly still causes server sysadmins pain in some situations, I’d rather deal with the problems we have with it than take a leap of faith with systemd this close to an LTS.
Today I happened to run across the Eucalyptus 3.0 roadmap and was happy to see so many planned features that have been requested by us and others for a while now. A lot of the features previously reserved for the commercial (and closed-source) Eucalyptus Enterprise Edition are now being open-sourced…and this is great! High Availability…User/Group management…LDAP integration…and support for Windows Guest Images are just a few. With this announcement added to the current momentum and buzz behind OpenStack (Cactus coming this week…Diablo planning in 2 weeks!!)…and now VMware has rolled out Cloud Foundry!!! Boy oh boy…it’s looking like we’re in for a very interesting summer.
Jono passed along a few more questions that were in the queue that we couldn’t get to due to time constraints. So with that said, here they are:
So it’s been a crazy ride this year for me at Canonical. I started out covering for Matt Zimmerman while he took on an internal project, which was an eye-opening adventure, where I learned to greatly appreciate the day-to-day demands the Canonical CTO encounters. During this temporary assignment, I was involved in Canonical’s work with Google on ChromeOS and our partnership with ARM to roll out Linaro. We also released another LTS…and had a little “excitement” the day of the 10.04 LTS release.
Soon after Matt’s return, I created a little video for 10.10 and resumed my duties leading the Foundations and Security team…but then had to jump into Ubuntu 10.10 Release Manager duties, as our esteemed colleague and close friend, Steve Langasek, took on another opportunity with Linaro. During my stint as Ubuntu Release Manager, I had a chance to tweak the yearly release schedule and do a “once in a lifetime” (according to some) release. It was a fun ride, but I’m glad the role is now in safer hands.
After 10.10, I was actually looking forward to life returning to “normal”, but as fate would have it (I know…cliché)…it was time for more excitement. Shortly after the 10.10 release, our Canonical Server team manager, Jos Boumans, decided to take on a new challenge outside of Canonical. Shortly thereafter, the Ubuntu Server technical lead, Thierry Carrez, and the team’s most senior engineer, Mathias Gug, decided to make the ultimate Ubuntu Server developer community contribution…by joining it and moving on to new adventures. As you might imagine, while the Ubuntu Server team wished all three great luck, it left them in a bit of a bind…not to mention a little in the dumps motivationally. With me being familiar with the team and having a server background, it only made sense that I cover until we found a full-time manager for the team.
There I was…managing Foundations, Security, and Server…which really meant I knew very little about a whole lot, as it was simply too many features, people, and subject matter to have a firm grasp on. Because of this, and my constant need to “raise my game”, I decided to apply for the Ubuntu Server manager role full-time…and lucky for me, I got it! Now, this didn’t immediately alleviate the management hat trick I was pulling, but it allowed Rick and me to move forward with posting for a backfill, as we knew this couldn’t go on for long…it had to be resolved “quickly” (I know…but I just couldn’t resist).
So here I am, the official manager of the Canonical Ubuntu Server team (and acting manager of Foundations and Security)…..wow….up ’til now, I’ve been pretty client focused…..now I have to switch gears to the server workspace?…..backfill two positions?…..figure out our cloud infrastructure stack?….hell, figure out cloud!……what the %$#! did I just get myself into!!!!
Then I realized what a waste of time that would be, and even more importantly, just stupid. I don’t even feel like these server distributions are “competition”, hell…they’re allies. Furthermore, I have more important uses of my time than trying to make Ubuntu Server a clone of any one of them. This includes Debian, whom we owe our existence to and respect enough that I want to make damn sure we don’t simply ship out the same product, but with a slightly different installer or a handful of additional patches. If folks want Debian, then by all means…use Debian…it is as rock solid as they come.
I want folks using Ubuntu on their server for the same reasons people use Ubuntu on their desktops, laptops, and netbooks….because it’s easy to use, easy to install, fast, technically innovative, up-to-date with the latest hardware support, and backed by one of the best opensource communities in existence….and of course free .
With all that said, I’m not naive enough to think that desktop users = server users in terms of what they want/need. I realize most sysadmins are risk averse, cringe at the thought of upgrading their OS every 6 months, and couldn’t care less about how visually stunning their boot sequence is. However, they do want fast start-up times (especially for cloud instances, where time is money)…quick, easy, and scalable installations…and support for the just-released RAID or SAN storage adapter they installed. So I don’t want to duplicate everything we’ve done with Ubuntu Desktop, but I believe we can improve Ubuntu Server based on the same ideals and concepts that make Ubuntu Desktop such a success. I’m also not suggesting we stop releasing Ubuntu Server every 6 months, or treat non-LTS releases as unimportant…but we should consider each LTS as a perfectly integrated set of features that we’ve delivered throughout the previous three releases.
“Okay Robbie…sounds good…but I thought Ubuntu Server is now targeted for the cloud…are you leaving us bare-metal folks behind?” ABSOLUTELY NOT. Make no mistake, I want Ubuntu Server to be the best operating system for the cloud (period). The success of Ubuntu Desktop led to Ubuntu being the most popular OS used in “the cloud” today, and we’d be fools to ignore this. The traditional Linux server landscape has been dominated by RHEL/SLES on the licensed-install side, and Debian/CentOS on the no-cost-license side…finding a way to squeeze Ubuntu Server in would be a steep uphill battle. Our success in the cloud is the disruptive force we need to get us in the game. In my opinion, for Ubuntu Server to be the best operating system for the cloud, we have to succeed in two areas:
I strongly believe both efforts will incorporate and require help from our existing Ubuntu Server community….and hopefully grow it.
No matter how widespread the cloud becomes, there will always be a need for hardware. This hardware will need an operating system that takes full advantage of its features and overlays the necessary software fabric needed in a cloud datacenter. If Ubuntu Server is going to be the best cloud hosting operating system, then we have to focus on two things:
We need to run well on hardware targeted for the cloud, e.g. high-volume, low power footprint hardware typically sold at a relatively low price because manufacturers know people need a lot of them. This means:
The other side of the coin is making sure we excel running in the cloud. Ubuntu Server should not only run well in Ubuntu hosted clouds, but in popular cloud datacenters that support Linux instances, like Amazon EC2, Rackspace Cloud, the IBM Cloud, etc. A few examples of how we might do this are:
We need to take a serious look at the Ubuntu Server installer…think about the types of users we not only have now, but also want in the future…and make sure we address everyone’s needs. At the most recent Ubuntu 11.04 UDS we held a session about providing an Install Service, and those in attendance seemed to really get behind us doing such a feature. For details, I recommend reading the blueprint and spec, but in short, we would divide the Ubuntu Server install into 2 steps:
The Ubuntu Server team has undertaken a full evaluation of several options for the backend of this Install Service, and has come to some conclusions. We propose that our efforts be focused on improving one of the industry standard packages, namely, Cobbler.
One key component of any modern system is participating in a distributed network. The rise of Infrastructure as a Service (IaaS) providers has shown us the power of having provisioning APIs for doing interesting things, like spawning nodes, tearing them down, and reconfiguring them for other purposes.
While the actual hardware may not be as readily available as VMs, it is no less important that upon arrival, this hardware is easy to provision and integrates with existing systems quickly. A sysadmin’s time is valuable, and the less time they spend bootstrapping new machines, the more time they can spend elsewhere.
With that in mind, the Ubuntu Server team believes it is key that a provisioning system be built around a web-based API. This is why we have selected Cobbler as the provisioning system most likely to improve the Ubuntu admin’s experience.
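To illustrate what a web-based provisioning API buys you, here’s a sketch of registering a new machine through Cobbler’s XML-RPC interface from Python. The host name, credentials, and profile name are placeholders, and the call sequence is a minimal sketch of Cobbler’s remote API, not a complete workflow:

```python
# Driving Cobbler's XML-RPC provisioning API: register a system record so
# the machine PXE-installs the chosen profile on its next boot.
import xmlrpc.client

def provision_node(url, user, password, name, mac, profile):
    """Create and save a Cobbler system record for one machine."""
    server = xmlrpc.client.ServerProxy(url)
    token = server.login(user, password)        # authenticate, get a session token
    handle = server.new_system(token)           # create a blank system record
    server.modify_system(handle, "name", name, token)
    server.modify_system(handle, "hostname", name, token)
    server.modify_system(handle, "profile", profile, token)
    server.modify_system(handle, "modify_interface",
                         {"macaddress-eth0": mac}, token)
    server.save_system(handle, token)           # commit the record
    server.sync(token)                          # regenerate PXE/DHCP config

# Example call (placeholder values):
# provision_node("http://cobbler.example.com/cobbler_api", "cobbler", "secret",
#                "node01", "00:16:3e:aa:bb:cc", "ubuntu-server-x86_64")
```

Because it’s all remote calls, the same few lines can sit behind a web UI, a CLI, or a batch script that enrolls a whole rack at once, which is the point of choosing an API-driven provisioner.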
During the process of evaluation we considered other options:
We also considered adapting our own tools, such as uec-provisioning or cloud-init, but felt our time and resources would be better spent improving existing solutions that already have the base functionality we need. Cobbler did, at one time, have the full ability to install Ubuntu, but that support has gone unmaintained. Also, there isn’t a current package in Debian or Ubuntu, but some work has already been done to produce one, and it’s not far off from the level needed to add it to Ubuntu. Finally, Cobbler was the likely candidate selected in the session at UDS, and it got the most positive feedback in preliminary discussions with Ubuntu Server community members.
We fully expect this effort to span multiple releases, and have a goal of it being done by 11.10 to allow for testing and bug fixing for the 12.04LTS. Our focus for the 11.04 development cycle will be:
I’m really excited about what we can bring to the table over the next few releases. The ideas and plans I just laid out are just that…my ideas and plans…so don’t be surprised if any of it changes. Ubuntu Server has had the fortunate luxury of “riding on the coattails” of Ubuntu Desktop’s success, and while I am thankful for this…I would love to see Ubuntu Server take “top billing” in the 12.04 LTS release. I strongly believe that in order for this to happen, we will need to make some changes, and I can’t promise they’ll be changes everyone agrees with, and that’s fine…communities our size, with such committed members, will have disagreements. What I can promise is that we will make them transparently and try our damnedest to consider what’s best for our users. And if/when we slip up, I fully expect our users and our community to keep us honest by calling us out on it…just as you’ve done so well all along.
Some time ago a group of hyper-intelligent pan-dimensional beings decided to finally answer the great question of Life, the Universe and Everything. To this end, a small band of these Debians built an incredibly powerful distribution, Ubuntu. After this great computer programme had run (a very quick 3 million minutes…or 6 years), the answer was announced. The Ultimate Answer to Life, the Universe and Everything is…42, and in its purest form, 101010. Which suggests that what you really need to know is ‘What was the Question?’. The great distribution kindly pointed out that the problem really was that no one knew the question. Accordingly, the distribution designed a set of successors, marked by a circle of friends…to ultimately bring Unity to all things living…Ubuntu 10.10, to find the question to the ultimate answer.
And with that, the Ubuntu team is pleased to announce Ubuntu 10.10. Codenamed “Maverick Meerkat”, 10.10 continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.
For more, please see the official release announcement.
There has recently been some discussion on the Ubuntu Tech Board mailing list around releasing Ubuntu 10.10 on 10/10/10 and the impact on Ubuntu’s promise to release on a regular six month cadence. In my thinking around this, I decided to take a deeper look at the schedule of past releases…to see how close to a cadence we actually are. Interestingly enough, I found that while the releases were in the same month (and towards the end), the amount of development and bug fix time varied greatly.
The simplistic approach to doing the releases is to break them into equal 26 week cycles:
Looking at last year’s releases, we came close to this: 25 wks for 9.04 and 27 wks for 9.10. However, if you take into account the traditional US/European holidays towards the end of the calendar year, I think you get a more realistic view of the schedule…and realize how different the 2 cycles were:
You will notice the weeks I call out as “part-time” development. Anyone who attends UDS knows of the “UDS Hangover”, where we are both recovering from the event and scrambling to finalize our specs. The other dips in development occur during the US Thanksgiving holiday and what is typically called the holiday season in the US and Europe. I realize that Ubuntu has developers worldwide (and this is one of our greatest strengths!), but there is a large portion of key developers who live in these regions, who have families and lives to live, and thus go on holiday at this time. I shaded the weeks of UDS and the Christmas holiday darker, as these are the deepest dips in developer productivity. If you’ve developed for Ubuntu, there is no doubt that you’ve felt rushed in the xx.04 release…and this is why. Looking at 10.04 and the projected 10.10 schedule, the same will hold true despite having an equal number of calendar weeks:
If we look further into the releases at 11.04, the trend continues:
I would like it if each cycle had the same amount of development and bug fixing time, to make it easier to plan our workloads and preserve quality from release to release….establishing a true cadence to our process. If we focus the scheduling around development and bug fixing, the 10.10 and 11.04 schedules look like this:
While the overall week counts differ, the development and bug fix periods are the same. If we project this out to the next potential LTS (12.04), we even have some slack for extended bug fixing (i.e. 2 Betas) due to the part-time development weeks:
So let’s return to looking at Maverick under the new focus on development and bug fixing cadences:
Does a move to doing a 10/10/10 release now look so impossible?
“11 weeks of development! and only 7 weeks of bug fixing?!…we can’t do it!!!”, you might say. My response is that we already have:
9.04 had a total of 10 weeks of development (full and part-time) and 7 weeks of testing…and it was a pretty solid release, if I remember correctly. So to steal a familiar phrase, Don’t Panic…a 10.10.10 release will be just fine.
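For anyone who wants to check the 25- and 27-week cycle lengths quoted above, it’s a few lines of date arithmetic against the published release dates of 8.10, 9.04, and 9.10:

```python
# Cycle length = weeks between consecutive Ubuntu release dates.
from datetime import date

releases = {
    "8.10": date(2008, 10, 30),
    "9.04": date(2009, 4, 23),
    "9.10": date(2009, 10, 29),
}

def cycle_weeks(prev, this):
    """Whole weeks between two releases, i.e. the length of `this` cycle."""
    return (releases[this] - releases[prev]).days // 7

print(cycle_weeks("8.10", "9.04"))  # 25 weeks for the 9.04 cycle
print(cycle_weeks("9.04", "9.10"))  # 27 weeks for the 9.10 cycle
```

The same arithmetic against any proposed 10.10 date makes it easy to see how many calendar weeks a 10/10/10 release actually leaves for development and bug fixing.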
Almost there!!!! http://www.youtube.com/watch?v=xeKKIifkZY
We’re getting closer!!!
© 2010 Canonical Ltd. Ubuntu and Canonical are registered trademarks of Canonical Ltd.