Canonical Voices

Posts tagged with 'openstack'

Greg Lutostanski

We (the Canonical OIL dev team) are about to finish the production roll out of our OpenStack Interoperability Lab (OIL). It’s been an awesome time getting here so I thought I would take the opportunity to get everyone familiar, at a high level, with what OIL is and some of the cool technology behind it.

So what is OIL?

For starters, OIL is essentially continuous integration of the entire stack, from hardware preparation, to Operating System deployment, to orchestration of OpenStack and third party software, all while running specific tests at each point in the process. All test results and CI artifacts are centrally stored for analysis and monthly report generation.

Typically, setting up a cloud (particularly OpenStack) for the first time can be frustrating and time consuming. The potential combinations and permutations of hardware/software components and configurations can quickly become mind-numbing. To help ease the process and provide stability across options we sought to develop an interoperability test lab to vet as much of the ecosystem as possible.

To accomplish this we developed a CI process for building and tearing down entire OpenStack deployments in order to validate every step in the process and to make sure it is repeatable. The OIL lab is comprised of a pool of machines (including routers/switches, storage systems, and compute servers) from a large number of partners. We continually pull available nodes from the pool, set up the entire stack, go to town testing, and then tear it all back down again. We do this so many times that we are already deploying around 50 clouds a day and expect to scale this by a factor of 3-4 with our production roll-out. Generally, each cloud is composed of about 5-7 machines, but we have the ability to scale each test as well.
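The cycle described above can be sketched roughly as follows (a toy illustration only; all names and numbers here are made up and this is not the actual OIL code):

```python
import random

# Hypothetical sketch of the OIL cycle: pull nodes from the pool,
# deploy a cloud, run tests, then tear everything back down.
POOL = [f"node-{i:02d}" for i in range(40)]  # machines from partners

def run_cycle(pool, cloud_size=6):
    """Reserve nodes, 'deploy' a cloud, 'test' it, then release the nodes."""
    nodes = random.sample(pool, cloud_size)      # machine selection
    results = {node: "PASS" for node in nodes}   # stand-in for Tempest runs
    # teardown: nodes go straight back into the pool for the next cloud
    return results

# ~50 clouds a day means this loop effectively runs continuously
daily_results = [run_cycle(POOL) for _ in range(50)]
print(len(daily_results))  # 50 clouds' worth of stored artifacts
```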

But that’s not all, in addition to testing we also do bug triage, defect analysis and work both internally and with our partners on fixing as many things as we can. All to ensure that deploying OpenStack on Ubuntu is as seamless a process as possible for both users and vendors alike.

Underlying Technology

We didn’t want to reinvent the wheel, so we are leveraging the latest Ubuntu technologies as well as some standard tools to do all of this. In fact, the majority of the OIL infrastructure is public code you can get and start playing with right away!

Here is a small list of what we are using for all this CI goodness:

  • MaaS — to do the base OS install
  • Juju — for all the complicated OpenStack setup steps — and linking them together
  • Tempest — the standard test suite that pokes and prods OpenStack to ensure everything is working
  • Machine selection & random config generation code — to make sure we get a good hardware/software cross-section
  • Jenkins — gluing everything together

Using all of this we are able to manage our hardware effectively, and with a similar setup, you can too. This is just a high-level overview so we will have to leave the in-depth technological discussions for another time.

More to come

We plan on having a few more blog posts covering some of the more interesting aspects (both results we are getting from OIL and some underlying technological discussions).

We are getting very close to OIL’s official debut and are excited to start publishing some really insightful data.

Read more
pitti

Our current autopkgtest machinery uses Jenkins (a private and a public one) and lots of “rsync state files between hosts”, both of which have reached a state where they fall over far too often. It’s flaky, hard to maintain, and hard to extend with new test execution slaves (e. g. for new architectures, or using different test runners). So I’m looking into what it would take to replace this with something robust, modern, and more lightweight.

In our new Continuous Integration world the preferred technologies are RabbitMQ for doing the job distribution (which is delightfully simple to install and use from Python), and OpenStack’s swift for distributed data storage. We have a properly configured swift in our data center, but for local development and experimentation I really just want a dead simple throw-away VM or container which gives me the swift API. swift is quite a bit more complex, and it took me several hours of reading and exercising various tutorials, debugging connection problems, and reading stackexchange to set it up. But now it’s working, and I condensed the whole setup into a single setup-swift.sh shell script.

You can run this in a standard Ubuntu container or VM as root; for example, to create and use an LXC container:

sudo apt-get install lxc
sudo lxc-create -n swift -t ubuntu -- -r trusty
sudo lxc-start -n swift
# log in as ubuntu/ubuntu, and wget or scp setup-swift.sh
sudo ./setup-swift.sh

Then get swift’s IP from sudo lxc-ls --fancy, install the swift client locally, and talk to it:

$ sudo apt-get install python-swiftclient
$ swift -A http://10.0.3.134:8080/auth/v1.0 -U testproj:testuser -K testpwd stat

Caveat: Don’t use this for any production machine! It’s configured to maximum insecurity, with static passwords and everything.
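For the curious, the first thing the swift client does above is a v1.0 auth request: it sends the user and key as headers, and swift replies with X-Storage-Url and X-Auth-Token headers that every later request must carry. Here is a minimal stdlib sketch of just that request construction (no network call is made, since the lab container is only reachable locally):

```python
import urllib.request

AUTH_URL = "http://10.0.3.134:8080/auth/v1.0"  # IP from `lxc-ls --fancy`

def auth_request(auth_url, user, key):
    """Build the v1.0 auth request with the user/key headers swift expects."""
    req = urllib.request.Request(auth_url)
    req.add_header("X-Auth-User", user)   # e.g. testproj:testuser
    req.add_header("X-Auth-Key", key)     # e.g. testpwd
    return req

req = auth_request(AUTH_URL, "testproj:testuser", "testpwd")
# urllib.request.urlopen(req) would return the storage URL and auth token
print(req.full_url)
```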

I realize this is just poor man’s juju, but juju-local is currently not working for me (I only just analyzed that). There is a charm for swift as well, but I haven’t tried that yet. In any case, it’s dead simple now, and maybe useful for someone else.

Read more
Prakash

The company has been advertising to hire an engineering director who will “lead GoDaddy’s internal infrastructure-as-a-service project by adopting and contributing to OpenStack,” according to an ad posted to LinkedIn and the OpenStack Foundation website.

The ad doesn’t offer much more detail and GoDaddy did not reply to a request for comment so it’s hard to know how extensively it plans to use OpenStack. But adopting OpenStack to run internal operations would be in line with recent comments made by the company’s CIO, who told a publication called Business Cloud News just last week that the company is planning a big internal shift to the cloud and will use open source software to execute this vision.

Read More:  http://www.itworld.com/cloud-computing/401451/godaddy-goes-openstack

Read more
Prakash

PayPal has spoken publicly and regularly about its private OpenStack implementation and recently said that 20 percent of its infrastructure runs on OpenStack.

But it’s only a matter of time before PayPal starts running some of its operations on public clouds, said James Barrese, CTO of PayPal.

“We have a few small apps that aren’t financial related where we’re doing experiments on the public cloud,” he said. “We’re not using it in a way that’s a seamless hybrid because we’re a financial system and have very stringent security requirements.”

Read More: http://www.itworld.com/cloud-computing/400964/private-cloud-poster-child-paypal-experimenting-public-cloud

Read more
Prakash

Blueshift makes cool speakers that charge in 5 minutes and play for 6 hours.

Sam Beck is the guy behind Blueshift, an open source sustainable electronics business that is all about building cool stuff. Helium speakers are the company’s first product to market and will be the world’s first supercapacitor-powered portable speakers. Not to mention the design files are open source.

In this interview, Sam shares with me his unique business mindset and why he’s not afraid anyone will steal his thunder, even though they might have access to his designs.

If we build stuff that’s cool enough, we’ll find a way to make money.

Read More: https://opensource.com/life/13/12/interview-blueshift-sam-beck

Read more
Prakash

You gotta love it when one vendor helpfully announces another vendor’s plans. That’s what apparently happened Monday when Rackspace Chairman and co-founder Graham Weston was quoted in the Wall Street Journal’s CIO blog saying that Salesforce.com would start running OpenStack’s open-source cloud technology.

Read More: http://gigaom.com/2013/12/17/salesforce-com-will-adopt-openstack-says-rackspace/

Read more
Prakash

OpenStack, a non-profit organization promoting open source cloud computing software, wants to increase its presence in India.

The organization has formed a three-pronged strategy: launching new products and features, tapping organizations deploying cloud computing, and training the vast channel base of its alliance partners who have a strong presence in the country.

Mark Collier, COO, OpenStack, affirmed, “After the US, India and China are the most important countries for us. We will target the large organizations that are either in the process of deploying, or have a cloud computing strategy in place. And cloud computing requires a lot of business transformation because of the cultural shift and dramatic changes in processes.”

 

Read More: http://www.crn.in/news/software/2013/11/15/openstack-keen-on-indian-market

Read more
Mark Baker

To paraphrase from Mark Shuttleworth’s keynote at the OpenStack Developer Summit last week in Hong Kong, building clouds is no longer exciting. It’s easy. That’s somewhat of an exaggeration, of course, as clouds are still a big choice for many enterprises, but there is still a lot of truth in Mark’s sentiment. The really interesting part about the cloud now is what you actually do with it, how you integrate it with existing systems, and how powerful it can be.

OpenStack has progressed tremendously in its first few years, and Ubuntu’s goal has been to show that it is just as stable, production-ready, easy-to-deploy and manage as any other cloud infrastructure. For our part, we feel we’ve done a good job, and the numbers certainly seem to support that. More than 3,000 people from 50 countries and 480 cities attended the OpenStack Summit in Hong Kong, a new record for the conference, and a recent IDG Connect survey found that 84 percent of enterprises plan to make OpenStack part of their future clouds.

Clearly OpenStack has proven itself. And, now, the OpenStack community’s aim is making it work even better with more technologies, more players and more platforms to do more complex things more easily. These themes were evident from a number of influential contributors at the event and require an increased focus amongst the OpenStack community:

Global Collaboration

OpenStack’s collaborative roots were exemplified early on with the opening address by Daniel Lai, Hong Kong’s CIO, who talked about how global the initially U.S.-founded project has become. There are now developers in more than 400 cities around the world with the highest concentration of developers located in Beijing.

Focus on the Core

One of the first to directly hit on the theme of needing more collaboration, though, was Mark Shuttleworth with a quote from Albert Einstein: “Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction.” OpenStack has grown fantastically, but we do, as a community, need to ensure we can support that growth rate. OpenStack should focus on the core services and beyond that, provide a mechanism to let many additional technologies plug in, or “let a thousand flowers bloom,” as Mark eloquently put it.

HP’s Monty Taylor also called for more collaboration between all of OpenStack’s players to really continue enhancing the core structure and principle of OpenStack. As he put it, “If your amazing plug-in works but the OpenStack core doesn’t, your plug-in is sitting on a pile of mud.” A bit blunt, but it gets to the point of needing to make sure that the core benefits of OpenStack – that an open and interoperable cloud is the only cloud for the future – are upheld.

Greasing the Wheels of Interoperability

And, that theme of interoperability was at the core of one of Ubuntu’s own announcements at the Hong Kong summit: the Ubuntu OpenStack Interoperability Lab, or Ubuntu OIL. Ubuntu has always been about giving companies choice, especially in the cloud. Our contributions to OpenStack so far have included new hypervisors, SDN stacks and the ability to run different workloads on multiple clouds.

We’ve also introduced Juju, which is one step up from a traditional configuration management tool and is able to distil functions into groups – we call them Charms – for rapid deployment of complex infrastructures and services.

With all the new capabilities being added to OpenStack, Ubuntu OIL will test all of these options, and other non-OpenStack-centric technologies, to ensure Ubuntu OpenStack offers the broadest set of validated and supported technology options compatible with user deployments.

Collaboration and interoperability testing like this will help ensure OpenStack only becomes easier to use for enterprises, and, thus, more enticing to adopt.

For more information on Ubuntu OIL, or to suggest components for testing in the lab, email us at oil@ubuntu.com or visit http://www.ubuntu.com/cloud/ecosystem/ubuntu-oil

Read more
Prakash

  • US Number 1 Country, India Number 2!
  • Ubuntu No 1 OS.
  • KVM Number 1 Hypervisor.

Read more
Prakash

Netflix has developed Asgard, a web interface that lets engineers and developers manage their AWS infrastructure using a GUI rather than a command line.

Netflix Asgard is open source.

PayPal, a big user of OpenStack, has ported Asgard to OpenStack.

Read More: http://gigaom.com/2013/10/02/paypal-has-rebuilt-netflixs-cloud-management-system-for-openstack/

Read more
Mark Baker

When it comes to using Linux on an enterprise server, Ubuntu is generally seen as the new challenger in a market dominated by established vendors specifically targeting enterprises. However, we are seeing signs that this is changing. The W3Techs data showing Ubuntu’s continued growth as a platform for online scale-out infrastructure is becoming well known, but a more recent highlight is a review published by Network World of five commercial Linux-based servers (note registration required to read the whole article).

The title of the review “Ubuntu impresses in Linux enterprise test” is encouraging right from the start, but what may surprise some readers are the areas in which the reviewers rated Ubuntu highly:

 

1. Transparency (Free and commercially supported versions are the same.)

This has long been a key part of Ubuntu and we are pleased that its value is gaining broader recognition. From an end user perspective this model has many benefits, primarily the zero migration cost of moving between an unsupported environment (say, in development) and a supported one (in production). With many organisations moving towards models of continuous deployment this can be extremely valuable.

2. Management tools

The reviewers seemed particularly impressed with the management tools that come with Ubuntu, supported with Ubuntu Advantage: Metal as a Service (MAAS), for rapid bare metal provisioning; Juju for service deployment and orchestration; and Landscape for monitoring, security and maintenance management. At Canonical we have invested significantly in these tools over the last few years, so it is good to know that the results have been well received.

Landscape Cloud Support


3. Cloud capability

The availability of cloud images that run on public clouds is called out as being valuable, as is the inclusion of OpenStack for building an OpenStack cloud. Cloud has been a key part of Ubuntu’s focus since 2008, when we started to create and publish images onto EC2. With the huge growth of Amazon and the more recent rapid adoption of OpenStack, having cloud support baked into Ubuntu and instantly available to end users is valuable.

4. Virtualisation support

It is sometimes thought that Ubuntu is not a great virtualisation platform, mainly because it is not really marketed as being one. The reality, as recognised by the Network World reviewers, is that Ubuntu has great hypervisor support. Like some other vendors we default to KVM for general server virtualisation, but when it comes to hypervisor support for Infrastructure as a Service (IaaS), Ubuntu is far more hypervisor agnostic than many others, supporting not only KVM but also VMware ESXi and Xen. Choice is a good thing.

Of course there are areas of Ubuntu that the reviewers believed to be weak – installation being the primary one. We’ll take this on board and are confident that future releases will deliver an improved installation experience. There are areas that you could suggest are important to an enterprise that are not covered in the review – commercial application support being one – but the fact remains that viewed as a platform in its own right, with a vast array of open source applications available via Juju, Ubuntu seems to be on the right path. If it continues this way, soon it could well cease to be the challenger and become the leader.

Read more
Robbie


So I’m partially kidding… the Ubuntu Edge is quickly becoming a crowdfunding phenomenon, and everyone should support it if they can. If we succeed, it will be a historic moment for Ubuntu, crowdfunding, and the global phone industry as well.

But I Don’t Wanna Talk About That Right Now

While I’m definitely a fan of the phone stuff, I’m a cloud and server guy at heart and what’s gotten me really excited this past month have been two significant (and freaking awesome) announcements.

#1 The Juju Charm Championship


First off, if you still don’t know about Juju, it’s essentially our attempt at making Cloud Computing for Human Beings.  Juju allows you to deploy, connect, manage, and scale web services and applications quickly and easily…again…and again…AND AGAIN!  These services are captured in what we call charms, which contain the knowledge of how to properly deploy, configure, connect, and scale the services and applications you will want to deploy in the cloud.  We have hundreds of charms for every popular and well-known web service and application in use in the cloud today.  They’ve been authored and maintained by the experts, so you don’t have to waste your time trying to become one.  Just as Ubuntu depends on a community of packagers and developers, so does Juju.  Juju goes only as far as our Charm Community will take us, and this is why the Charm Championship is so important to us.

So… what is this Charm Championship all about?  We took notice of the fantastic response to the Cloud-Prize contest run by our good friends (and Ubuntu Server users) over at Netflix.  So we thought we could do something similar to boost the number of full service solutions deployable by Juju, i.e. Charm Bundles.  If charms are the APT packages of the cloud, bundles are effectively the package seeds, thus allowing you to deploy groups of services, configured and interconnected all at once.  We’ve chosen this approach to increase our bundle count because we know from our experience with Ubuntu, that the best approach for growth will be by harvesting and cultivating the expertise and experience of the experts regularly developing and deploying these solutions.  For example, we at Canonical maintain and regularly deploy an OpenStack bundle to allow us to quickly get our clouds up for both internal use and for our Ubuntu Advantage customers.  We have master level expertise in OpenStack cloud deployments, and thus have codified this into our charms so that others are able to benefit.  The Charm Championship is our attempt to replicate this sharing of similar master level expertise across more service/application bundles… BY OFFERING $30,000 USD IN PRIZE MONEY! Think of how many Ubuntu Edge phones that could buy you…well, unless you desperately need to have one of the first 50 :-).

#2 JujuCharms.com

From the very moment we began thinking about creating Juju years ago…we always envisioned eventually creating an interface that provides solution architects the ability to graphically create, deploy, and interact with services visually…replicating the whiteboard planning commonly employed in the planning phase of such solutions.

The new Juju GUI, now integrated into JujuCharms.com, is the realization of our vision, and I’m excited as hell at the possibilities opened and the technical roadblocks removed by the release of this tool.  We’ve even charmed it, so you can ‘juju deploy juju-gui’ into any supported cloud, bare metal (MAAS), or local workstation (via LXC) environment.  Below is a video of deploying OpenStack via our new GUI, and a perfect example of the possibilities that are opened up now that we’ve released this innovative and f*cking awesome tool:

The best part here is that you can play with the new GUI RIGHT NOW by selecting the “Build” option on jujucharms.com… so go ahead and give it a try!

Join the Championship…Play with the GUI…then Buy the Phone

Cause I will definitely admit…it’s a damn sexy piece of hardware. ;)


Read more
Mark Baker

Juju, the leading tool for continuous deployment, continuous integration (CI/CD), and cloud-neutral orchestration, now has a refreshed GUI with smoother workflows for integration professionals spinning up many services across clouds like Amazon EC2 and a range of public OpenStack providers. The new GUI speeds up service design – conceptual modelling of service relationships – as well as actual deployment, providing a visual map of the relationships between services.

“The GUI is now a first-class part of the Juju experience,” said Gary Poster, whose team led the work, “with an emphasis on rapid access to the collection of service charms and better visualisation of the deployment in question”. In this milestone the Juju GUI can act as a whiteboard, so a user can mock up the service orchestration they intend to create using the same Juju GUI that they will use to manage their real, live deployments. Users can experience the new interface for themselves at jujucharms.com with no need to set up software in advance.

Juju is used by organisations that are constantly deploying and redeploying collections of services. Companies focused on media, professional services, and systems integration are the heaviest users, who benefit from having repeatable best-practice deployments across a range of cloud environments.

Juju uniquely enables the reuse of shared components called ‘charms’ for common parts of a complex service. A large portfolio of existing open source components is available from a public Charm collection, and browsing that collection is built into the new GUI. Charms are easy to find and review in the GUI, with full documentation instantly accessible. Featured, recommended and popular charms are highlighted for easy discovery. Each Charm now has more detailed information including test results from all supported providers, download count, related Charms, and a Charm code quality rating. The Charm collection includes both certified, supported Charms, and a wider range of ad-hoc Charms that are published by a large community of contributors.

The simple browser-based interface makes it easy to find reusable open source charms that define popular services like Hadoop, Storm, Ceph, OpenStack, MySQL, RabbitMQ, MongoDB, Cassandra, Mediawiki and WordPress. Information about each service, such as configuration options, is immediately available, and the charms can then be dragged and dropped directly on a canvas where they can be connected to other services, deployed and scaled. It’s also possible to export these service topologies into a human-readable and -editable format that can be shared within a team or published as a reference architecture for that deployment.
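As a rough illustration of what such an exported topology might look like (the service names, charm URLs and options below are hypothetical, not an actual export):

```yaml
# Hypothetical exported bundle: two services and one relation
envExport:
  series: precise
  services:
    mediawiki:
      charm: cs:precise/mediawiki
      num_units: 2
    mysql:
      charm: cs:precise/mysql
      num_units: 1
  relations:
    - ["mediawiki:db", "mysql:db"]
```

Because the format is plain YAML, a team can check a file like this into version control and redeploy the same topology later.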

Recent additions to the public Charm collection include OpenVPN AS, Liferay, Storm and Varnish. For developers the new GUI and Charm Browser mean that their Charms are now much more discoverable. For those taking part in the Charm Championship, it’s easier to upload their Charms and use the GUI to connect them into a full solution for entry into the competition. Submit your best Charmed solution for the possibility of winning $10,000.

The management interface for Charm authors has also been enhanced and is available immediately at http://manage.jujucharms.com/.

See how you can use Juju to deploy OpenStack:

The current version of Juju supports Amazon EC2, HP Cloud and many other OpenStack clouds, as well as in-memory deployment for test and dev scenarios. Juju is on track for a 1.12 release in time for Ubuntu 13.10 that will enhance scalability for very large deployments, and a 2.0 release in time for Ubuntu 14.04 LTS.

See it demoed: We’ll be showing off the new Juju GUI and charm browser at OSCON on Tuesday 23rd at 9:00AM in the Service Orchestration In the Cloud with Juju workshop.

Read more
Mark Baker

We are pleased to announce a seriously good addition to our product team: Ratnadeep (Deep) Bhattacharjee. Deep joins Canonical as Director of Cloud Product Management from VMware, where he led its Cloud Infrastructure Platform effort and has a solid understanding of customer needs as they continue to move to virtual and cloud infrastructure.

Ubuntu has fast become the operating system of choice for cloud computing and Ubuntu is the most popular platform for OpenStack. With Deep’s direction, we plan to continue to lead Ubuntu OpenStack into enterprises, carriers and service providers looking for new ways to deliver next generation infrastructure without the ‘enterprise’ price tag and lock in. He will also be key in building out our great integration story with VMWare to help customers who will run heterogeneous environments. Welcome Deep!

Read more
Mark Baker

In April at the OpenStack Summit, Canonical founder Mark Shuttleworth quipped “My OpenStack, how you’ve grown” as a reference to the thousands of people in the room. OpenStack is indeed growing up and it seems incredible that this Friday, we celebrate OpenStack’s 3rd Birthday.

Incredible – it seems like only yesterday OpenStack was a twinkle in the eyes of a few engineers getting together in Austin. Incredible that OpenStack has come so far in such a short time. Ubuntu has been with OpenStack every day of the 3-year journey so far, which is why the majority of OpenStack clouds are built on Ubuntu Server and Ubuntu OpenStack continues to be one of the most popular OpenStack distributions available.

It is also why we are proud to host the London OpenStack 3rd Birthday Party at our HQ in London. We’d love to see you using OpenStack with Ubuntu, but even if you don’t, you should come and celebrate OpenStack with us on Friday, July 19th.

http://www.meetup.com/Openstack-London/

Read more
Mark Baker

Ubuntu developer contest offers $10,000 for the most innovative charms

Developers around the world are already saving time and money thanks to Juju, and now they have the opportunity to win money too. Today marks the opening of the Juju Charm Championship, in which developers can reap big rewards for getting creative with Juju charms.

If you haven’t met Juju yet, now’s the ideal time to dive in. Juju is a service orchestration tool, a simple way to build entire cloud environments and deploy, scale and manage complex workloads using only a few commands. It takes all the knowledge of an application and wraps it up into a re-usable Juju charm, ready to be quickly deployed anywhere. And you can modify and combine charms to create a custom deployment that meets your needs.

Juju is a powerful tool, and its flexibility means it’s capable of things we haven’t even imagined yet. So we’re kicking off the Charm Championship to discover what happens when the best developers bring Juju into their clouds — with big rewards on offer.

The prizes

As well as showing off the best achievements to the community, our panel of judges will award $10,000 cash prizes to the best charmed solutions in a range of categories.

That’s not all. Qualifying participants will be eligible for a joint marketing programme with Canonical, including featured application slots on ubuntu.com,  joint webinars and more. Win the Charm Championship and your app will reach a whole new audience.

Get started today

If you’re a Juju wizard, we want to see what magic you’re already creating. If you’re not, now’s a great time to start — it only takes five minutes to get going with Juju.

The Charm Championship runs until 1 October 2013, and it’s open to individuals, teams, companies and organisations. For more details and full competition rules, visit the Charm Championship page.

Read more
Mark Baker

“May you live in interesting times.” This Chinese proverb probably resonates well with teams running OpenStack in production over the last 18 months. But, at the OpenStack Summit in Portland, Ubuntu and Canonical founder Mark Shuttleworth demonstrated that life is going to get much less ‘interesting’ for people running OpenStack and that is a good thing.

OpenStack has come a long way in a short time. The OpenStack Summit event in April attracted 3000 attendees with pretty much every significant technology company represented.

Only 12 months ago, being able to install OpenStack in just a few hours was deemed an extraordinary feat. Since then, deployment tools such as Juju have simplified the process, and today very large companies such as AT&T, HP and Deutsche Telekom have been able to rapidly push OpenStack clouds into production. This means the community has had to look into solving the next wave of problems – managing the cloud in production, upgrading OpenStack, upgrading the underlying infrastructure and applying security fixes – all without disrupting services running in the cloud.

With the majority of OpenStack clouds running on Ubuntu, Canonical has been uniquely positioned to work on this. We have spent 18 months building out Juju and Landscape, our service orchestration and systems management tools, to solve these problems, and at the Summit, Mark Shuttleworth demonstrated just how far they have come. During a 30-minute session, Mark performed kernel upgrades on a live running system without service interruption. He talked about the integrations and partnerships in place with VMWare, Microsoft and Inktank that mean these technologies can be incorporated into an OpenStack Cloud on Ubuntu with ease. This is the kind of practicality that OpenStack users need and represents how OpenStack is growing up. It also makes OpenStack less “interesting” and far more adoptable by a typical user, which is what OpenStack needs in order to continue its incredible growth. We at Canonical aim to be with it every step of the way.

Read more
roaksoax

For a while, I have been wanting to write about MAAS and how it can easily deploy workloads (especially OpenStack) with Juju, and the time has finally come. This will be the first of a series of posts where I’ll provide an overview of how to quickly get started with MAAS and Juju.

What is MAAS?

I think that MAAS needs no introduction, but for anyone who does need one, this awesome video will provide a far better explanation than the one I can give in this blog post.

http://youtu.be/J1XH0SQARgo

 

Components and Architecture

MAAS has been designed so that it can be deployed in different architectures and network environments, as either a Single-Node or a Multi-Node architecture. This allows MAAS to be a deployment system that scales to meet your needs. It has two basic components: the MAAS Region Controller and the MAAS Cluster Controller.

MAAS Architectures

Region Controller

The MAAS Region Controller is the component users interact with, and it controls the Cluster Controllers. It hosts the Web UI and the API. The Region Controller is also home to the MAAS metadata server for cloud-init, and it is where the DNS server runs. It also configures an rsyslogd server to log the installation process, as well as a proxy (squid-deb-proxy) used to cache Debian packages. The preseeds used for the different stages of the process are stored here as well.

Cluster Controller

The MAAS Cluster Controller interfaces only with the Region Controller and is in charge of provisioning in general. It hosts the TFTP and DHCP server(s), and stores both the PXE files and the ephemeral images. It is also the Cluster Controller’s job to power the managed nodes on and off (if so configured).

The Architecture

As you can see in the image above, MAAS can be deployed on either a single node or multiple nodes. The design makes MAAS highly scalable: you can add more Cluster Controllers, each managing a different pool of machines. A single-node scenario can grow into a multi-node scenario by simply adding more Cluster Controllers. Each Cluster Controller has to register with the Region Controller, and each can be configured to manage a different network. The intent is for each Cluster Controller to manage a different pool of machines on a different network (for provisioning), allowing MAAS to manage hundreds of machines. This is completely transparent to users because MAAS presents all the machines as a single pool, which can be used for deploying and orchestrating your services with Juju.
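To sketch how a second Cluster Controller would be added to grow into the multi-node architecture (package name from the MAAS packaging of that era; the region URL is a placeholder you would substitute):

```shell
# On the new machine, install only the cluster controller piece.
sudo apt-get install maas-cluster-controller

# When prompted during installation (or later via dpkg-reconfigure),
# point it at the existing Region Controller, e.g. http://<region-ip>/MAAS.
# The cluster then registers itself and awaits acceptance in the
# region's WebUI.
sudo dpkg-reconfigure maas-cluster-controller
```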

How Does It Work?

MAAS has three basic stages: Enlistment, Commissioning and Deployment, each explained below.

MAAS Process

Enlistment

The enlistment process is the process by which a new machine is registered with MAAS. When a new machine is started, it obtains an IP address and PXE boots from the MAAS Cluster Controller. The PXE boot process instructs the machine to load an ephemeral image that runs an initial discovery process (via a preseed fed to cloud-init). This discovery process gathers basic information such as network interfaces, MAC addresses and the machine’s architecture. Once this information is gathered, a request to register the machine is made to the MAAS Region Controller, and the machine appears in MAAS in the Declared state.
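As a sketch of how you might watch enlistment from the API side, using the maas-cli and 1.0 API of that era (the profile name “maas”, the host and the key are placeholders):

```shell
# Log in once; the API key comes from your user preferences page
# in the WebUI.
maas-cli login maas http://<region-controller>/MAAS/api/1.0 <api-key>

# List the known nodes; freshly enlisted machines show up with
# "Declared" status.
maas-cli maas nodes list
```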

Commissioning

The commissioning process is the process where MAAS collects hardware information, such as the number of CPU cores, RAM, disk size, etc., which can later be used as constraints. Once the machine has been enlisted (Declared state), it must be accepted into MAAS for the commissioning process to begin and for it to become ready for deployment. In the WebUI, for example, an “Accept & Commission” button will be present. Once the machine is accepted into MAAS, it will PXE boot from the MAAS Cluster Controller and be instructed to run the same ephemeral image again. This time, however, the commissioning process gathers more information about the machine, which is sent back to the MAAS Region Controller (via cloud-init, from the MAAS metadata server). Once this process has finished, the machine’s information will be updated and it will change to the Ready state, meaning the machine is ready for deployment.
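Accepting machines can also be done from the CLI rather than the WebUI; a minimal sketch, assuming a maas-cli profile named “maas” is already logged in:

```shell
# Accept every Declared node in one go; each node will power on,
# PXE boot the ephemeral image and run commissioning.
maas-cli maas nodes accept-all
```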

Deployment

Once machines are in the Ready state, they can be used for deployment. Deployment can happen with either juju or the maas-cli (or even the WebUI). The maas-cli will only let you install Ubuntu on a machine, while juju will not only deploy Ubuntu on it but also let you orchestrate services on top. When a machine has been deployed, its state changes to Allocated to <user>. This state means that the machine is in use by the user who requested its deployment.
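A minimal sketch of the juju side, assuming your environments.yaml already points at the MAAS Region Controller (the charm name and constraint here are purely illustrative):

```shell
# Bootstrap allocates a Ready machine from MAAS for the juju
# state server.
juju bootstrap

# Deploy a service; the constraint is matched against the hardware
# data MAAS gathered during commissioning.
juju deploy --constraints mem=4G mysql
```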

Releasing Machines

Once a user no longer needs the machine, it can be released, and its status will change from Allocated to <user> back to Ready. The machine will be turned off and made available for later use.
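With juju, releasing looks roughly like this (service name and machine number are illustrative):

```shell
# Remove the service, then the machine that hosted it; MAAS powers
# the node off and its status returns to Ready.
juju destroy-service mysql
juju terminate-machine 1
```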

But… How do Machines Turn On/Off?

Now, you might be wondering how the machines are turned on and off, and who is in charge of that. MAAS can manage power devices such as IPMI/iLO cards, Sentry Switch CDUs, and even virsh. By default, we expect all the machines controlled by MAAS to have IPMI/iLO cards, so MAAS will attempt to auto-detect and auto-configure them during the Enlistment and Commissioning processes. Once machines are accepted into MAAS (after enlistment), they are turned on automatically and commissioned (provided IPMI was discovered and configured correctly). This also means that every time a machine is deployed, it is turned on automatically.

Note that MAAS handles not only physical machines but also virtual machines, hence the virsh power management type. However, you will have to configure the details manually for MAAS to be able to manage these virtual machines and turn them on and off automatically.
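For example, a hypothetical sketch of pointing a node at a libvirt host over the API (the power_parameters_ prefix follows the API convention of that era; the system ID, libvirt URI and VM name are all placeholders):

```shell
# Tell MAAS to drive this node over virsh instead of IPMI.
maas-cli maas node update <system-id> \
    power_type=virsh \
    power_parameters_power_address=qemu+ssh://user@kvm-host/system \
    power_parameters_power_id=<vm-name>
```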

Read more
Mark Baker

If you are interested in either OpenStack or MySQL (or both), then you need to know about two meetups running on the evening of May 23rd in London.

The London OpenStack meetup.

This is the third meeting to take place and promises to be a good one, with three talks planned so far:

* Software defined networking and OpenStack – VMWare Nicera’s Andrew Kennedy
* OpenStack Summit Overview – Rackspace’s Kevin Jackson
* An introduction to the Heat API – Red Hat’s Steven Hardy

For a 4th talk we are looking at a customer example – watch this space.

To come along please register at:

http://www.meetup.com/Openstack-London/

The MySQL Meetup.

This group hasn’t met for quite some time, but MySQL remains as popular as ever, and new developments with MariaDB mean the group has plenty to catch up on. There are two talks planned so far:

* HP’s database as a service – HP’s Andrew Hutching

* ‘Whatever he wants to talk about’ – MySQL and MariaDB founder Monty Widenius.

 

With David Axmark also in attendance, it could be one of the most significant MySQL meetings in London ever. Not one to miss if you are interested in MySQL, MariaDB or related technologies.

MySQL meetups are managed in Facebook – please register to attend here:

http://www.meetup.com/The-London-MySQL-Meetup-Group/events/110243482/

 

Of course given the events are running in rooms next to each other you are welcome to register for both and switch between them based on the schedule. We hope to see you there!

Read more