Canonical Voices

Posts tagged with 'openstack'

Dustin Kirkland



I hope you'll join me at Rackspace on Tuesday, August 19, 2014, at the Cloud Austin Meetup, at 6pm, where I'll use our spectacular Orange Box to deploy Hadoop, scale it up, run a terasort, destroy it, deploy OpenStack, launch instances, and destroy it too.  I'll talk about the hardware (the Orange Box, Intel NUCs, Managed VLAN switch), as well as the software (Ubuntu, OpenStack, MAAS, Juju, Hadoop) that makes all of this work in 30 minutes or less!

Be sure to RSVP, as space is limited.

http://www.meetup.com/CloudAustin/events/194009002/

Cheers,
Dustin

Read more
Prakash Advani

The company has pledged to invest $1 billion over the next two years in open cloud products and services built on community-driven, open-source cloud technologies.

“Just as the community spread the adoption of Linux in the enterprise, we believe OpenStack will do the same for the cloud,” said Hewlett-Packard CEO and President Meg Whitman, in a webcast announcing Helion Tuesday.

Read More

Read more
Dustin Kirkland

It was September of 2009.  I answered a couple of gimme trivia questions and dropped my business card into a hat at a Linux conference in Portland, Oregon.  A few hours later, I received an email...I had just "won" a developer edition HTC Dream -- the Android G1.  I was quite anxious to have a hardware platform where I could experiment with Android.  I had, of course, already downloaded the SDK, compiled Android from scratch, and fiddled with it in an emulator.  But that experience fell far short of Android running on real hardware.  Until the G1.  The G1 was the first device to truly showcase the power and potential of the Android operating system.

And with that context, we are delighted to introduce the Orange Box!


The Orange Box


Conceived by Canonical and custom built by TranquilPC, the Orange Box is a 10-node cluster computer that fits in a suitcase.

Ubuntu, MAAS, Juju, Landscape, OpenStack, Hadoop, CloudFoundry, and more!

The Orange Box provides a spectacular development platform, showcasing in mere minutes the power of hardware provisioning and service orchestration with Ubuntu, MAAS, Juju, and Landscape.  OpenStack, Hadoop, CloudFoundry, and hundreds of other workloads deploy in minutes, to real hardware -- not just instances in AWS!  It also makes one hell of a Steam server -- there's a charm for that ;-)
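
To give a sense of what that orchestration looks like, here's a simplified sketch of the Juju workflow behind those demos, using the classic two-service example for brevity (the OpenStack and Hadoop bundles follow the same deploy/relate/scale pattern, just with more services; an environment configured against the box's MAAS is assumed):

juju bootstrap                     # MAAS powers on and provisions the first node
juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql  # wire the services together
juju expose wordpress
juju add-unit -n 2 wordpress       # scale out across more NUCs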


OpenStack, deployed by Juju, takes merely 6 minutes on an Orange Box

Most developers here certainly recognize the term "SDK", or "Software Development Kit"...  You can think of the Orange Box as an "HDK", or "Hardware Development Kit".  Pair an Orange Box with MAAS and Juju, and you have yourself a compact cloud.  Or a portable big data number cruncher.  Or a lightweight cluster computer.


The underside of an Orange Box, with its cover off


Want to get your hands on one?

Drop us a line, and we'd be delighted to hand-deliver an Orange Box to your office and conduct 2 full days of technical training, covering MAAS, Juju, Landscape, and OpenStack.  The box is yours for 2 weeks, as you experiment with the industry-leading Ubuntu ecosystem of cloud technologies at your own pace and with your own workloads.  We'll show back up a couple of weeks later to review what you learned and discuss scaling these tools up, into your own data center, on your own enterprise hardware.  (And if you want your very own Orange Box to keep, you can order one from our friends at TranquilPC.)


Manufacturers of the Orange Box

Gearhead like me?  Interested in the technical specs?


Remember those posts late last year about Intel NUCs?  Someone took notice, and we set out to build this ;-)


Each Orange Box chassis contains:
  • 10x Intel NUCs, each containing:
    • Intel HD Graphics 4000 GPU
    • 16GB of DDR3 RAM
    • 120GB SSD root disk
    • Intel Gigabit ethernet
  • D-Link DGS-1100-16 managed gigabit switch with 802.1q VLAN support
    • All 10 nodes are internally connected to this gigabit switch
  • 100-240V AC/DC power supply
    • Adapter supplied for US, UK, and EU plug types
    • 19V DC power supplied to each NUC
    • 5V DC power supplied to internal network switch


Intel NUC D53427RKE board

That's basically an Amazon EC2 m3.xlarge ;-)

The first node, node0, additionally contains:
  • A 2TB Western Digital HDD, preloaded with a full Ubuntu archive mirror
  • USB and HDMI ports are wired and accessible from the rear of the box

Most planes fly in clouds...this cloud flies in planes!


In aggregate, this micro cluster effectively fields 40 cores, 160GB of RAM, 1.2TB of solid state storage, and is connected over an internal gigabit network fabric.  A single fan quietly cools the power supply, while all of the nodes are passively cooled by aluminum heat sinks spanning each side of the chassis. All in a chassis the size of a tower PC!
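
The aggregate arithmetic is simply the per-node resources times ten (each NUC's dual-core, hyper-threaded i5 presents 4 hardware threads, which is where the "cores" figure comes from):

10 nodes x 4 threads = 40 cores
10 nodes x 16 GB = 160 GB of RAM
10 nodes x 120 GB = 1.2 TB of solid state storage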

It fits in a suitcase, and can travel anywhere you go.


Pelican iM2875 Storm Case

How are we using them at Canonical?

If you're here at the OpenStack Summit in Atlanta, GA, you'll see at least a dozen Orange Boxes: in our booth, on stage during Mark Shuttleworth's keynote, and in our breakout conference rooms.


Canonical sales engineer, Ameet Paranjape,
demonstrating OpenStack on the Orange Box in the Ubuntu booth
at the OpenStack Summit in Atlanta, GA
We are also launching an update to our OpenStack Jumpstart program, where we'll deliver an Orange Box and 2 full days of training to your team, and leave you the box while you experiment with OpenStack, MAAS, Juju, Hadoop, and more for 2 weeks.  Without disrupting your core network or production data center workloads, prototype your OpenStack experience within a private sandbox environment.  You can experiment with various storage alternatives, practice scaling services, and destroy and rebuild the environment repeatedly.  Safe.  Risk free.


This is Cloud, for the Free Man.

:-Dustin

Read more
Greg Lutostanski

We (the Canonical OIL dev team) are about to finish the production roll out of our OpenStack Interoperability Lab (OIL). It’s been an awesome time getting here so I thought I would take the opportunity to get everyone familiar, at a high level, with what OIL is and some of the cool technology behind it.

So what is OIL?

For starters, OIL is essentially continuous integration of the entire stack, from hardware preparation, to Operating System deployment, to orchestration of OpenStack and third party software, all while running specific tests at each point in the process. All test results and CI artifacts are centrally stored for analysis and monthly report generation.

Typically, setting up a cloud (particularly OpenStack) for the first time can be frustrating and time consuming. The potential combinations and permutations of hardware/software components and configurations can quickly become mind-numbing. To help ease the process and provide stability across options we sought to develop an interoperability test lab to vet as much of the ecosystem as possible.

To accomplish this we developed a CI process for building and tearing down entire OpenStack deployments, validating every step in the process and making sure it is repeatable. The OIL lab is comprised of a pool of machines (including routers/switches, storage systems, and compute servers) from a large number of partners. We continually pull available nodes from the pool, set up the entire stack, go to town testing, and then tear it all back down again. We do this so many times that we are already deploying around 50 clouds a day and expect to scale this by a factor of 3-4 with our production roll-out. Generally, each cloud is composed of about 5-7 machines, but we have the ability to scale each test as well.

But that’s not all: in addition to testing, we also do bug triage and defect analysis, and work both internally and with our partners on fixing as many things as we can. All to ensure that deploying OpenStack on Ubuntu is as seamless a process as possible for users and vendors alike.

Underlying Technology

We didn’t want to reinvent the wheel, so we are leveraging the latest Ubuntu technologies as well as some standard tools to do all of this. In fact, the majority of the OIL infrastructure is public code you can get and start playing with right away!

Here is a small list of what we are using for all this CI goodness:

  • MAAS — to do the base OS install
  • Juju — for all the complicated OpenStack setup steps — and linking them together
  • Tempest — the standard test suite that pokes and prods OpenStack to ensure everything is working
  • Machine selection & random config generation code — to make sure we get a good hardware/software cross section
  • Jenkins — gluing everything together
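
To give a flavour of how these pieces fit together, here is a heavily simplified, hypothetical sketch of one deploy-test-teardown iteration (in reality Jenkins and our config generation code drive all of this; the charm names are real, everything else is illustrative):

#!/bin/bash
# One OIL iteration, greatly simplified
set -e
juju bootstrap                    # MAAS pulls a free node from the pool
juju deploy mysql
juju deploy rabbitmq-server
juju deploy keystone
juju deploy nova-cloud-controller
juju deploy -n 4 nova-compute     # a typical 5-7 machine cloud
juju add-relation keystone mysql
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller keystone
juju add-relation nova-compute nova-cloud-controller
# ...wait for all units to report "started", run the Tempest suite
# against the new cloud, archive the results, and then:
juju destroy-environment         # tear it all back down, returning nodes to the pool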

Using all of this we are able to manage our hardware effectively, and with a similar setup, you easily can too. This is just a high-level overview, so we will have to leave the in-depth technological discussions for another time.

More to come

We plan on having a few more blog posts covering some of the more interesting aspects (both results we are getting from OIL and some underlying technological discussions).

We are getting very close to OIL’s official debut and are excited to start publishing some really insightful data.

Read more
pitti

Our current autopkgtest machinery uses Jenkins (a private and a public one) and lots of “rsync state files between hosts”, both of which have reached a state where they fall over far too often. It’s flaky, hard to maintain, and hard to extend with new test execution slaves (e.g. for new architectures, or using different test runners). So I’m looking into what it would take to replace this with something robust, modern, and more lightweight.

In our new Continuous Integration world the preferred technologies are RabbitMQ for doing the job distribution (which is delightfully simple to install and use from Python), and OpenStack’s swift for distributed data storage. We have a properly configured swift in our data center, but for local development and experimentation I really just want a dead simple throw-away VM or container which gives me the swift API. swift is quite a bit more complex, and it took me several hours of reading and exercising various tutorials, debugging connection problems, and reading stackexchange to set it up. But now it’s working, and I condensed the whole setup into a single setup-swift.sh shell script.

You can run this in a standard ubuntu container or VM as root:

sudo apt-get install lxc
sudo lxc-create -n swift -t ubuntu -- -r trusty
sudo lxc-start -n swift
# log in as ubuntu/ubuntu, and wget or scp setup-swift.sh
sudo ./setup-swift.sh

Then get swift’s IP from sudo lxc-ls --fancy, install the swift client locally, and talk to it:

$ sudo apt-get install python-swiftclient
$ swift -A http://10.0.3.134:8080/auth/v1.0 -U testproj:testuser -K testpwd stat
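
From there, the usual object operations work against the same endpoint; for example (the container and file names here are just mine for illustration):

$ swift -A http://10.0.3.134:8080/auth/v1.0 -U testproj:testuser -K testpwd upload mycontainer testfile
$ swift -A http://10.0.3.134:8080/auth/v1.0 -U testproj:testuser -K testpwd list mycontainer
$ swift -A http://10.0.3.134:8080/auth/v1.0 -U testproj:testuser -K testpwd download mycontainer testfile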

Caveat: Don’t use this for any production machine! It’s configured to maximum insecurity, with static passwords and everything.

I realize this is just poor man’s juju, but juju-local is currently not working for me (I only just analyzed that). There is a charm for swift as well, but I haven’t tried that yet. In any case, it’s dead simple now, and maybe useful for someone else.

Read more
Prakash

The company has been advertising to hire an engineering director who will “lead GoDaddy’s internal infrastructure-as-a-service project by adopting and contributing to OpenStack,” according to an ad posted to LinkedIn and the OpenStack Foundation website.

The ad doesn’t offer much more detail and GoDaddy did not reply to a request for comment so it’s hard to know how extensively it plans to use OpenStack. But adopting OpenStack to run internal operations would be in line with recent comments made by the company’s CIO, who told a publication called Business Cloud News just last week that the company is planning a big internal shift to the cloud and will use open source software to execute this vision.

Read More:  http://www.itworld.com/cloud-computing/401451/godaddy-goes-openstack

Read more
Prakash

PayPal has spoken publicly and regularly about its private OpenStack implementation and recently said that 20 percent of its infrastructure runs on OpenStack.

But it’s only a matter of time before PayPal starts running some of its operations on public clouds, said James Barrese, CTO of PayPal.

“We have a few small apps that aren’t financial related where we’re doing experiments on the public cloud,” he said. “We’re not using it in a way that’s a seamless hybrid because we’re a financial system and have very stringent security requirements.”

Read More: http://www.itworld.com/cloud-computing/400964/private-cloud-poster-child-paypal-experimenting-public-cloud

Read more
Prakash

Blueshift makes cool speakers that charge in 5 minutes and play for 6 hours.

Sam Beck is the guy behind Blueshift, an open source sustainable electronics business that is all about building cool stuff. Helium speakers are the company’s first product to market and will be the world’s first supercapacitor-powered portable speakers. Not to mention the design files are open source.

In this interview, Sam shares with me his unique business mindset and why he’s not afraid anyone will steal his thunder, even while they might have access to his design.

If we build stuff that’s cool enough, we’ll find a way to make money.

Read More: https://opensource.com/life/13/12/interview-blueshift-sam-beck

Read more
Prakash

You gotta love it when one vendor helpfully announces another vendor’s plans. That’s what apparently happened Monday when Rackspace Chairman and co-founder Graham Weston was quoted in the Wall Street Journal’s CIO blog saying that Salesforce.com would start running OpenStack’s open-source cloud technology.

Read More: http://gigaom.com/2013/12/17/salesforce-com-will-adopt-openstack-says-rackspace/

Read more
Prakash

OpenStack, a non-profit organization promoting open source cloud computing software, wants to increase its presence in India.

The organization has formed a three-pronged strategy—launching new products and features, tapping organizations deploying cloud computing, and training the vast channel base of its alliance partners who have a strong presence in the country.

Mark Collier, COO, OpenStack, affirmed, “After the US, India and China are the most important countries for us. We will target the large organizations that are either in the process of deploying, or have a cloud computing strategy in place. And cloud computing requires a lot of business transformation because of the cultural shift and dramatic changes in processes.”

 

Read More: http://www.crn.in/news/software/2013/11/15/openstack-keen-on-indian-market

Read more
Dustin Kirkland

Last week, I posed a question on Google+, looking for suggestions on a minimal physical format, x86 machine.  I was looking for something like a Raspberry Pi (of which I already have one), but really it had to be x86.

I was aware of a few options out there, but I was very fortunately introduced to one spectacular little box...the Intel NUC!

The unboxing experience is nothing short of pure marketing genius!



The "NUC" stands for Intel's Next Unit of Computing.  It's a compact little device, that ships barebones.  You need to add DDR3 memory (up to 16GB), an mSATA hard drive (if you want to boot locally), and an mSATA WiFi card (if you want wireless networking).

The physical form factor of all models is identical:

  • 4.6" x 4.4" x 1.6"
  • 11.7cm x 11.2cm x 4.1cm

There are three different processor options:


And there are three different peripheral setups:

  • HDMI 1.4a (x2) + USB 2.0 (x3) + Gigabit ethernet
  • HDMI 1.4a (x1) + Thunderbolt supporting DisplayPort 1.1a (x1) + USB 2.0 (x3)
  • HDMI 1.4a (x1) + Mini DisplayPort 1.1a (x2) + USB 2.0 (x2); USB 3.0 (x1)
I ended up buying 3 of these last week, and reworked my audio/video and baby monitoring setup in the house.  I bought 2 of these (i3 + Ethernet), and 1 of these (i3 + Thunderbolt).

Quite simply, I couldn't be happier with these little devices!

I used one of these to replace the dedicated audio/video PC (an x201 Thinkpad) hooked up in my theater.  The x201 was a beefy machine, with plenty of CPU and video capability.  But it was pretty bulky, rather noisy, and drew too much power.

And the other two are Baby-buntu baby monitors, as previously blogged here, replacing a real piece-of-crap Lenovo Q100 (Atom + SiS307DV, and all the horror associated with that sick chipset).

All 3 are now running Ubuntu 13.10, spectacularly I might add!  All of the hardware cooperated perfectly.




Here are the two views that I really wanted Amazon to show me, as I was buying the device...what the inside looks like!  You can see two mSATA ports and red/black WiFi antenna leads on the left, and two DDR3 slots on the right.


On the left, you can now see a 24GB mSATA SSD, and beneath it (not visible) is an Intel Centrino Advanced-N 6235 WiFi adapter.  On the right, I have two 8GB DDR3 memory modules.

Note, to get wireless working properly I did have to:

echo "options iwlwifi 11n_disable=1" | sudo tee -a /etc/modprobe.d/iwlwifi.conf


The BIOS is really super fancy :-)  There's a mouse and everything.  I made a few minor tweaks: adjusted the boot order, assigned 512MB of memory to the display adapter, and configured it to power itself back on after any power loss.


Speaking of power, it draws about 10 watts at idle, which costs me about $11/year in electricity.
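
The back-of-the-envelope math behind that estimate, assuming roughly $0.12/kWh (about what I pay):

echo '10 * 24 * 365 / 1000' | bc -l   # ~87.6 kWh per year at a constant 10 watts
echo '87.6 * 0.12' | bc -l            # ~$10.51 per year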


Some of you might be interested in some rough disk IO statistics...

kirkland@living:~⟫ sudo hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 11306 MB in 2.00 seconds = 5657.65 MB/sec
Timing buffered disk reads: 1478 MB in 3.00 seconds = 492.32 MB/sec

And the lshw output...

    description: Desktop Computer
    product: (To be filled by O.E.M.)
    width: 64 bits
    capabilities: smbios-2.7 dmi-2.7 vsyscall32
    configuration: boot=normal chassis=desktop family=To be filled by O.E.M. sku=To be filled by O.E.M. uuid=[redacted]
  *-core
       description: Motherboard
       product: D33217CK
       vendor: Intel Corporation
       physical id: 0
       version: G76541-300
       serial: [redacted]
     *-firmware
          description: BIOS
          vendor: Intel Corp.
          physical id: 0
          version: GKPPT10H.86A.0025.2012.1011.1534
          date: 10/11/2012
          size: 64KiB
          capacity: 6336KiB
          capabilities: pci upgrade shadowing cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int14serial int17printer acpi usb biosbootspecification uefi
     *-cache:0
          width: 32 bits
          clock: 66MHz
          capabilities: storage msi pm ahci_1.0 bus_master cap_list
          configuration: driver=ahci latency=0
          resources: irq:40 ioport:f0b0(size=8) ioport:f0a0(size=4) ioport:f090(size=8) ioport:f080(size=4) ioport:f060(size=32) memory:f6906000-f69067ff
     *-serial UNCLAIMED
          description: SMBus
          product: 7 Series/C210 Series Chipset Family SMBus Controller
          vendor: Intel Corporation
          physical id: 1f.3
          bus info: pci@0000:00:1f.3
          version: 04
          width: 64 bits
          clock: 33MHz
          configuration: latency=0
          resources: memory:f6905000-f69050ff ioport:f040(size=32)
     *-scsi
          physical id: 1
          logical name: scsi0
          capabilities: emulated
        *-disk
             description: ATA Disk
             product: BP4 mSATA SSD
             physical id: 0.0.0
             bus info: scsi@0:0.0.0
             logical name: /dev/sda
             version: S8FM
             serial: [redacted]
             size: 29GiB (32GB)
             capabilities: gpt-1.00 partitioned partitioned:gpt
             configuration: ansiversion=5 guid=be0ab026-45c1-4bd5-a023-1182fe75194e sectorsize=512
           *-volume:0
                description: Windows FAT volume
                vendor: mkdosfs
                physical id: 1
                bus info: scsi@0:0.0.0,1
                logical name: /dev/sda1
                logical name: /boot/efi
                version: FAT32
                serial: 2252-bc3f
                size: 486MiB
                capacity: 486MiB
                capabilities: boot fat initialized
                configuration: FATs=2 filesystem=fat mount.fstype=vfat mount.options=rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro state=mounted
           *-volume:1
                description: EXT4 volume
                vendor: Linux
                physical id: 2
                bus info: scsi@0:0.0.0,2
                logical name: /dev/sda2
                logical name: /
                version: 1.0
                serial: [redacted]
                size: 25GiB
                capabilities: journaled extended_attributes large_files huge_files dir_nlink recover extents ext4 ext2 initialized
                configuration: created=2013-11-06 13:01:57 filesystem=ext4 lastmountpoint=/ modified=2013-11-12 15:38:33 mount.fstype=ext4 mount.options=rw,relatime,errors=remount-ro,data=ordered mounted=2013-11-12 15:38:33 state=mounted
           *-volume:2
                description: Linux swap volume
                vendor: Linux
                physical id: 3
                bus info: scsi@0:0.0.0,3
                logical name: /dev/sda3
                version: 1
                serial: [redacted]
                size: 3994MiB
                capacity: 3994MiB
                capabilities: nofs swap initialized
                configuration: filesystem=swap pagesize=4095

It also supports: virtualization technology, S3/S4/S5 sleep states, Wake-on-LAN, and PXE boot.  Sadly, it does not support IPMI :-(

Finally, it's worth noting that I bought the model with the i3 for a specific purpose...  These three machines all have full virtualization capabilities (KVM).  Which means these little boxes, with their dual-core hyper-threaded CPUs and 16GB of RAM, are about to become Nova compute nodes in my local OpenStack cluster ;-)  That will be a separate blog post ;-)

Dustin

Read more
Mark Baker

To paraphrase from Mark Shuttleworth’s keynote at the OpenStack Developer Summit last week in Hong Kong, building clouds is no longer exciting. It’s easy. That’s somewhat of an exaggeration, of course, as building a cloud is still a big undertaking for many enterprises, but there is a lot of truth in Mark’s sentiment. The really interesting part about the cloud now is what you actually do with it, how you integrate it with existing systems, and how powerful it can be.

OpenStack has progressed tremendously in its first few years, and Ubuntu’s goal has been to show that it is just as stable, production-ready, and easy to deploy and manage as any other cloud infrastructure. For our part, we feel we’ve done a good job, and the numbers certainly seem to support that. More than 3,000 people from 50 countries and 480 cities attended the OpenStack Summit in Hong Kong, a new record for the conference, and a recent IDG Connect survey found that 84 percent of enterprises plan to make OpenStack part of their future clouds.

Clearly OpenStack has proven itself. And, now, the OpenStack community’s aim is making it work even better with more technologies, more players and more platforms to do more complex things more easily. These themes were evident from a number of influential contributors at the event and require an increased focus amongst the OpenStack community:

Global Collaboration

OpenStack’s collaborative roots were exemplified early on with the opening address by Daniel Lai, Hong Kong’s CIO, who talked about how global the project, initially founded in the U.S., has become. There are now developers in more than 400 cities around the world, with the highest concentration of developers located in Beijing.

Focus on the Core

One of the first to directly hit on the theme of needing more collaboration, though, was Mark Shuttleworth with a quote from Albert Einstein: “Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction.” OpenStack has grown fantastically, but we do, as a community, need to ensure we can support that growth rate. OpenStack should focus on the core services and beyond that, provide a mechanism to let many additional technologies plug in, or “let a thousand flowers bloom,” as Mark eloquently put it.

HP’s Monty Taylor also called for more collaboration between all of OpenStack’s players to really continue enhancing the core structure and principle of OpenStack. As he put it, “If your amazing plug-in works but the OpenStack core doesn’t, your plug-in is sitting on a pile of mud.” A bit blunt, but it gets to the point of needing to make sure that the core benefits of OpenStack – that an open and interoperable cloud is the only cloud for the future – are upheld.

Greasing the Wheels of Interoperability

And that theme of interoperability was at the core of one of Ubuntu’s own announcements at the Hong Kong summit: the Ubuntu OpenStack Interoperability Lab, or Ubuntu OIL. Ubuntu has always been about giving companies choice, especially in the cloud. Our contributions to OpenStack so far have included new hypervisors, SDN stacks and the ability to run different workloads on multiple clouds.

We’ve also introduced Juju, which is one step up from a traditional configuration management tool and is able to distil functions into groups – we call them Charms – for rapid deployment of complex infrastructures and services.

With all the new capabilities being added to OpenStack, Ubuntu OIL will test all of these options, and other non-OpenStack-centric technologies, to ensure Ubuntu OpenStack offers the broadest set of validated and supported technology options compatible with user deployments.

Collaboration and interoperability testing like this will help ensure OpenStack becomes ever easier for enterprises to use and, thus, more enticing to adopt.

For more information on Ubuntu OIL, or to suggest components for testing in the lab, email us at oil@ubuntu.com or visit http://www.ubuntu.com/cloud/ecosystem/ubuntu-oil

Read more
Prakash

  • US Number 1 country, India Number 2!
  • Ubuntu Number 1 OS.
  • KVM Number 1 hypervisor.

Read more
Prakash

Netflix has developed Asgard, a web interface that lets engineers and developers manage their AWS infrastructure using a GUI rather than a command line.

Netflix Asgard is open source.

PayPal, a big user of OpenStack, has ported Asgard to OpenStack.

Read More: http://gigaom.com/2013/10/02/paypal-has-rebuilt-netflixs-cloud-management-system-for-openstack/

Read more
Mark Baker

When it comes to using Linux on an enterprise server, Ubuntu is generally seen as the new challenger in a market dominated by established vendors specifically targeting enterprises. However, we are seeing signs that this is changing. The W3Techs data showing Ubuntu’s continued growth as a platform for online scale-out infrastructure is becoming well known, but a more recent highlight is a review published by Network World of five commercial Linux-based servers (note registration required to read the whole article).

The title of the review “Ubuntu impresses in Linux enterprise test” is encouraging right from the start, but what may surprise some readers are the areas in which the reviewers rated Ubuntu highly:

 

1. Transparency (Free and commercially supported versions are the same.)

This has long been a key part of Ubuntu and we are pleased that its value is gaining broader recognition. From an end user perspective this model has many benefits, primarily the zero migration cost of moving between an unsupported environment (say, in development) and a supported one (in production). With many organisations moving towards models of continuous deployment this can be extremely valuable.

2. Management tools

The reviewers seemed particularly impressed with the management tools that come with Ubuntu, supported with Ubuntu Advantage: Metal as a Service (MAAS), for rapid bare metal provisioning; Juju for service deployment and orchestration; and Landscape for monitoring, security and maintenance management. At Canonical we have invested significantly in these tools over the last few years, so it is good to know that the results have been well received.

Landscape Cloud Support

3. Cloud capability

The availability of cloud images that run on public clouds is called out as being valuable, as is the inclusion of OpenStack for building your own OpenStack cloud. Cloud has been a key part of Ubuntu’s focus since 2008, when we started to create and publish images onto EC2. With the huge growth of Amazon and the more recent rapid adoption of OpenStack, having cloud support baked into Ubuntu and instantly available to end users is valuable.

4. Virtualisation support

It is sometimes thought that Ubuntu is not a great virtualisation platform, mainly because it is not really marketed as being one. The reality, as recognised by the Network World reviewers, is that Ubuntu has great hypervisor support. Like some other vendors, we default to KVM for general server virtualisation, but when it comes to hypervisor support for Infrastructure as a Service (IaaS), Ubuntu is far more hypervisor-agnostic than many others, supporting not only KVM but also VMware ESXi and Xen. Choice is a good thing.

Of course there are areas of Ubuntu that the reviewers believed to be weak – installation being the primary one. We’ll take this on board and are confident that future releases will deliver an improved installation experience. There are areas that you could suggest are important to an enterprise that are not covered in the review – commercial application support being one – but the fact remains that, viewed as a platform in its own right, with a vast array of open source applications available via Juju, Ubuntu seems to be on the right path. If it continues this way, it could well soon cease to be the challenger and become the leader.

Read more
Robbie


So I’m partially kidding…the Ubuntu Edge is quickly becoming a crowdfunding phenomenon, and everyone should support it if they can.  If we succeed, it will be a historic moment for Ubuntu, crowdfunding, and the global phone industry as well.

But I Don’t Wanna Talk About That Right Now

While I’m definitely a fan of the phone stuff, I’m a cloud and server guy at heart, and what’s really excited me this past month are two significant (and freaking awesome) announcements.

#1 The Juju Charm Championship


First off, if you still don’t know about Juju, it’s essentially our attempt at making Cloud Computing for Human Beings.  Juju allows you to deploy, connect, manage, and scale web services and applications quickly and easily…again…and again…AND AGAIN!  These services are captured in what we call charms, which contain the knowledge of how to properly deploy, configure, connect, and scale the services and applications you will want to deploy in the cloud.  We have hundreds of charms for every popular and well-known web service and application in use in the cloud today.  They’ve been authored and maintained by the experts, so you don’t have to waste your time trying to become one.  Just as Ubuntu depends on a community of packagers and developers, so does Juju.  Juju goes only as far as our Charm Community will take us, and this is why the Charm Championship is so important to us.

So…what is this Charm Championship all about?  We took notice of the fantastic response to the Cloud-Prize contest run by our good friends (and Ubuntu Server users) over at Netflix.  So we thought we could do something similar to boost the number of full service solutions deployable by Juju, i.e. Charm Bundles.  If charms are the APT packages of the cloud, bundles are effectively the package seeds, thus allowing you to deploy groups of services, configured and interconnected all at once.  We’ve chosen this approach to increase our bundle count because we know from our experience with Ubuntu that the best approach for growth is harvesting and cultivating the expertise and experience of the experts regularly developing and deploying these solutions.  For example, we at Canonical maintain and regularly deploy an OpenStack bundle to allow us to quickly get our clouds up for both internal use and for our Ubuntu Advantage customers.  We have master level expertise in OpenStack cloud deployments, and thus have codified this into our charms so that others are able to benefit.  The Charm Championship is our attempt to replicate this sharing of similar master level expertise across more service/application bundles…BY OFFERING $30,000 USD IN PRIZE MONEY!  Think of how many Ubuntu Edge phones that could buy you…well, unless you desperately need to have one of the first 50 :-).

#2 JujuCharms.com

From the very moment we began thinking about creating Juju years ago…we always envisioned eventually creating an interface that provides solution architects the ability to graphically create, deploy, and interact with services visually…replicating the whiteboard planning commonly employed in the planning phase of such solutions.

The new Juju GUI now integrated into JujuCharms.com is the realization of our vision, and I’m excited as hell at the possibilities opened and the technical roadblocks removed by the release of this tool.  We’ve even charmed it, so you can ‘juju deploy juju-gui’ into any supported cloud, bare metal (MAAS), or local workstation (via LXC) environment.  Below is a video of deploying OpenStack via our new GUI, and a perfect example of the possibilities that are opened up now that we’ve released this innovative and f*cking awesome tool:

The best part here is that you can play with the new GUI RIGHT NOW by selecting the “Build” option on jujucharms.com…so go ahead and give it a try!
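
And if you'd rather run your own copy of the GUI than use the hosted demo, it's a single charm deployment away (a sketch, assuming an already-bootstrapped environment):

juju deploy juju-gui
juju expose juju-gui
juju status   # note the juju-gui unit's public address, then browse to it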

Join the Championship…Play with the GUI…then Buy the Phone

Cause I will definitely admit…it’s a damn sexy piece of hardware. ;)


Read more
Prakash

Read more
Dustin Kirkland

UPDATE: I wrote this charm and blog post before I saw the unfortunate news that UbuntuForums.org, along with Apple's Developer website, had been very recently compromised and user databases stolen.  Given that such events have sadly become commonplace, the instructions below can actually be used by proactive administrators to identify and disable weak user passwords, expecting that the bad guys are already doing the same.

It's been about 2 years since I've written a Juju charm.  And even those that I wrote were not scale-out applications.

I've been back at Canonical for two weeks now, and I've been spending some time bringing myself up to speed on the cloud projects that form the basis for the Cloud Solution products, for which I'm responsible. First, I deployed MAAS, and then brought up a small Ubuntu OpenStack cluster.  Finally, I decided to tackle Juju and rather than deploying one of the existing charms, I wanted to write my own.

Installing Juju

Juju was originally written in Python, but has since been ported to Golang over the last 2+ years.  My previous experience was exclusively with the Python version of Juju, but all new development is now focused on the Golang version of Juju, also known as juju-core.  So at this point, I decided to install juju-core from the 13.04 (raring) archive.

sudo apt-get install juju-core

I immediately hit a couple of bugs in the version of juju-core in 13.04 (1.10.0.1-0ubuntu1~ubuntu13.04.1), particularly Bug #1172973.  Life is more fun on the edge anyway, so I upgraded to a daily snapshot from the PPA.

sudo apt-add-repository ppa:juju/devel
sudo apt-get update
sudo apt-get install juju-core

Now I'm running juju-core 1.11.2-3~1414~raring1, and it's currently working.

Configuring Juju

Juju can be configured to use a number of different cloud backends as "providers", notably, Amazon EC2, OpenStack, MAAS, and HP Cloud.

For my development, I'm using Canonical's internal deployment of OpenStack, and so I configured my environment accordingly in ~/.juju/environments.yaml:

default: openstack
environments:
  openstack:
    type: openstack
    admin-secret: any-secret-you-choose-randomly
    control-bucket: any-bucket-name-you-choose-randomly
    default-series: precise
    auth-mode: userpass

Using OpenStack (or even AWS for that matter) also requires defining a number of environment variables in an rc-file.  Basically, you need to be able to launch instances using euca2ools or ec2-api-tools.  That's outside of the scope of this post, and expected as a prerequisite.
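
For reference, such an rc-file typically exports variables along these lines (the values here are placeholders, not real credentials):

export OS_AUTH_URL=http://your-keystone-endpoint:5000/v2.0/
export OS_USERNAME=your-username
export OS_PASSWORD=your-password
export OS_TENANT_NAME=your-tenant
export EC2_ACCESS_KEY=your-access-key   # for euca2ools / ec2-api-tools
export EC2_SECRET_KEY=your-secret-key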

The official documentation for configuring your Juju environment can be found here.

Choosing a Charm-able Application

I have previously charmed two small (but useful!) webapps that I've written and continue to maintain -- Pictor and Musica.  These are both standalone web applications that allow you to organize, serve, share, and stream your picture archive and music collection.  But neither of these "scale out", really.  They certainly could, perhaps, use a caching proxy on the front end, and shared storage on the back end.  But, as I originally wrote them, they do not.  Maybe I'll update that, but I don't know of anyone using either of those charms.

In any case, for this experiment, I wanted to write a charm that would "scale out", with Juju's add-unit command.  I wanted to ensure that adding more units to a deployment would result in a bigger and better application.

For these reasons, I chose the program known as John-the-Ripper, or just john.  You can trivially install it on any Ubuntu system, with:

sudo apt-get install john

John has been used by Linux system administrators for over a decade to test the quality of their users' passwords.  A root user can view the hashes that protect user passwords in files like /etc/shadow, or even application-level password hashes in a database.  Effectively, it can be used to "crack" weak passwords.  There are almost certainly evil people using programs like john to do malicious things.  But as long as the good guys have access to a program like john too, they can ensure that their own passwords are impossible to crack.

John can work in a number of different "modes".  It can use a dictionary of words, and simply hash each of those words looking for a match.  The john-data package ships a word list in /usr/share/john/password.lst that contains 3,000+ words.  You can find much bigger wordlists online as well, such as this one, which contains over 2 million words.
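
If you'd like to see wordlist mode in action on a single machine before scaling it out, something like this works (using the stock list; "password" is, of course, in it):

printf "user0:%s\n" "$(mkpasswd -m md5 password)" > target_hashes
john -wordlist:/usr/share/john/password.lst target_hashes
john -show target_hashes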

John can also generate "twists" on these words according to some rules (like changing E's to 3's, and so on).  And it can also work in a complete brute force mode, generating every possible password from various character sets.  This, of course, will take exponentially longer run times, depending on the length of the password.

Fortunately, John can run in parallel, with as many workers as you have at your disposal.  You can run multiple processes on the same system, or you can scale it out across many systems.  There are many different approaches to parallelizing John, using OpenMP, MPI, and others.

I took a very simple approach, explained in the manpage and configuration file called "External".  Basically, in the /etc/john/john.conf configuration file, you tell each node how many total nodes exist, and which particular node they are.  Each node uses the same wordlist or sequential generation algorithm, and indexes these.  The node modulates the current index by the total number of nodes, and tries the candidate passwords that match their own id.  Dead simple :-)  I like it.

# Trivial parallel processing example
[List.External:Parallel]
/*
 * This word filter makes John process some of the words only, for running
 * multiple instances on different CPUs. It can be used with any cracking
 * mode except for "single crack". Note: this is not a good solution, but
 * is just an example of what can be done with word filters.
 */
int node, total; // This node's number, and node count
int number;      // Current word number

void init()
{
    node = 1; total = 2;  // Node 1 of 2, change as appropriate
    number = node - 1;    // Speedup the filter a bit
}

void filter()
{
    if (number++ % total) // Word for a different node?
        word = 0;         // Yes, skip it
}

This does, however, require some way of sharing the inputs, logs, and results across all nodes.  Basically, I need a shared filesystem.  The Juju charm collection has a number of shared filesystem charms already implemented.  I chose to use NFS in my deployment, though I could have just as easily used Ceph, Hadoop, or others.

Writing a Charm

The official documentation on writing charms can be found here.  That's certainly a good starting point, and I read all of that before I set out.  I also spent considerable time in the #juju IRC channel on irc.freenode.net, talking to Jorge and Marco.  Thanks, guys!

The base template of the charm is pretty simple.  The convention is to create a charm directory like this, and put it under revision control.

mkdir -p precise/john
bzr init .

I first needed to create the metadata that will describe my charm to Juju.  My charm is named john, after the application known as "John the Ripper", which can test the quality of your passwords.  I list myself as the maintainer.  This charm requires a shared filesystem that implements the mount interface, as my charm will call some hooks that make use of that mount interface.  Most importantly, this charm may have other peers, which I arbitrarily called workers.  They have a dummy interface (not used) called john.  Here's the metadata.yaml:

name: john
summary: "john the ripper"
description: |
  John the Ripper tests the quality of system passwords
maintainer: "Dustin Kirkland"
requires:
  shared-fs:
    interface: mount
peers:
  workers:
    interface: john

I also have one optional configuration parameter, called target_hashes.  This configuration string holds the input data that john will try to crack: anywhere from one to many different password hashes.  If it isn't specified, this charm actually generates some random ones, and then tries to break those.  I thought that would be nice, so that it's immediately useful out of the box.  Here's config.yaml:

options:
  target_hashes:
    type: string
    description: input password hashes

There are a couple of other simple files to create, such as copyright:

Format: http://dep.debian.net/deps/dep5/

Files: *
Copyright: Copyright 2013, Dustin Kirkland, All Rights Reserved.
License: GPL-3
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.

README and revision are also required.

And the Magic -- Hooks!

The real magic happens in a set of very specifically named hooks.  These are specially named executables, which can be written in any language.  For my purposes, shell scripts are more than sufficient.

The Install Hook

The install hook is what is run at installation time on each worker node.  I need to install the john and john-data packages, as well as the nfs-common client binaries.  I also make use of the mkpasswd utility provided by the whois package.  And I will also use the keep-one-running tool provided by the run-one package.  Finally, I need to tweak the configuration file, /etc/john/john.conf, on each node, to use all of the CPU, save results every 10 seconds (instead of every 10 minutes), and to use the much bigger wordlist that we're going to fetch.  Here's hooks/install:

#!/bin/bash
set -eu
juju-log "Installing all components"
apt-get update
apt-get install -qqy nfs-common john john-data whois run-one
DIR=/var/lib/john
mkdir -p $DIR
ln -sf $DIR /root/.john
sed -i -e "s/^Idle = .*/Idle = N/" /etc/john/john.conf
sed -i -e "s/^Save = .*/Save = 10/" /etc/john/john.conf
sed -i -e "s:^Wordlist = .*:Wordlist = $DIR\/passwords.txt:" /etc/john/john.conf
juju-log "Installed packages"

The Start Hook

The start hook defines how to start this application.  Ideally, the john package would provide an init script or upstart job that cleanly daemonizes its workers, but it currently doesn't.  But for a poor-man's daemonizer, I love the keep-one-running utility (written by yours truly).  I'm going to start two copies of john utility, one that runs in wordlist mode, trying every one of the 2 million words in my wordlist, as well as a second, which tries every combinations of characters in an incremental, brute force mode.  These binaries are going to operate entirely in the shared /var/lib/john NFS mount point.  Each copy on each worker node will need to have their own session file.  Here's hooks/start:

#!/bin/bash
set -eu
juju-log "Starting john"
DIR=/var/lib/john
keep-one-running john -incremental -session:$DIR/session-incremental-$(hostname) -external:Parallel $DIR/target_hashes &
keep-one-running john -wordlist:$DIR/passwords.txt -session:$DIR/session-wordlist-$(hostname) -external:Parallel $DIR/target_hashes &

The Stop Hook

The stop hook defines how to stop the application.  Here, I'll need to kill the keep-one-running processes which wrap john, since we don't have an upstart job or init script.  This is perhaps a little sloppy, but perfectly functional.  Here's hooks/stop:

#!/bin/bash
set -eu
juju-log "Stopping john"
killall keep-one-running || true

The Workers Relation Changed Hook

This hook defines the actions that need to be taken each time another john worker unit is added to the service.  Basically, each worker needs to recount how many total workers there are (using the relation-list command), determine their own id (from $JUJU_UNIT_NAME), update their /etc/john/john.conf (using sed), and then restart their john worker processes.  The last part is easy since we're using keep-one-running; we simply need to killall john processes, and keep-one-running will automatically respawn new processes that will read the updated configuration file.  This is hooks/workers-relation-changed:

#!/bin/bash
set -eu
DIR="/var/lib/john"
update_unit_count() {
    node=$(echo $JUJU_UNIT_NAME | awk -F/ '{print $2}')
    node=$((node+1))
    total=$(relation-list | wc -l)
    total=$((total+1))
    sed -i -e "s/^\s\+node = .*; total = .*;.*$/ node = $node; total = $total;/" /etc/john/john.conf
}
restart_john() {
    killall john || true
    # It'll restart itself via keep-one-running, if we kill it
}
update_unit_count
restart_john

The Configuration Changed Hook

All john worker nodes will operate on a file in the shared filesystem called /var/lib/john/target_hashes.  I'd like the administrator who deployed this service to be able to dynamically update that file and signal all of her worker nodes to restart their john processes.  Here, I used the config-get juju command, and again restart by simply killing the john processes and letting keep-one-running sort out the restart.  This is handled here in hooks/config-changed:

#!/bin/bash
set -e
DIR=/var/lib/john
target_hashes=$(config-get target_hashes)
if [ -n "$target_hashes" ]; then
    # Install the user's supplied hashes
    echo "$target_hashes" > $DIR/target_hashes
    # Restart john
    killall john || true
fi

The Shared Filesystem Relation Changed Hook

By far, the most complicated logic is in hooks/shared-fs-relation-changed.  There's quite a bit of work we need to do here, as soon as we can be assured that this node has successfully mounted its shared filesystem.  There's a bit of boilerplate mount work that I borrowed from the owncloud charm.  Beyond that, there's a bit of john-specific work.  I'm downloading the aforementioned larger wordlist.  I install the target hash, if specified in the configuration; otherwise, we just generate 10 random target passwords to try and crack.  We also symlink a bunch of john's runtime shared data into the NFS directory.  For no good reason, john expects a bunch of stuff to be in the same directory.  Of course, this code could really use some cleanup.  Here it is, imperfect but functional, hooks/shared-fs-relation-changed:
#!/bin/bash
set -eu

remote_host=`relation-get private-address`
export_path=`relation-get mountpoint`
mount_options=`relation-get options`
fstype=`relation-get fstype`
DIR="/var/lib/john"

if [ -z "${export_path}" ]; then
    juju-log "remote host not ready"
    exit 0
fi

local_mountpoint="$DIR"

create_local_mountpoint() {
    juju-log "creating local mountpoint"
    umask 022
    mkdir -p $local_mountpoint
    chown -R ubuntu:ubuntu $local_mountpoint
}
[ -d "${local_mountpoint}" ] || create_local_mountpoint

share_already_mounted() {
    mount | grep -q $local_mountpoint
}

mount_share() {
    for try in {1..3}; do
        juju-log "mounting share"
        [ ! -z "${mount_options}" ] && options="-o ${mount_options}" || options=""
        mount -t $fstype $options $remote_host:$export_path $local_mountpoint \
            && break
        juju-log "mount failed: ${local_mountpoint}"
        sleep 10
    done
}

download_passwords() {
    if [ ! -s $DIR/passwords.txt ]; then
        # Grab a giant dictionary of passwords, 20MB, 2M passwords
        juju-log "Downloading password dictionary"
        cd $DIR
        # http://www.breakthesecurity.com/2011/12/large-password-list-free-download.html
        wget http://dazzlepod.com/site_media/txt/passwords.txt
        juju-log "Done downloading password dictionary"
    fi
}

install_target_hashes() {
    if [ ! -s $DIR/target_hashes ]; then
        target_hashes=$(config-get target_hashes)
        if [ -n "$target_hashes" ]; then
            # Install the user's supplied hashes
            echo "$target_hashes" > $DIR/target_hashes
        else
            # Otherwise, grab some random ones
            i=0
            for p in $(shuf -n 10 $DIR/passwords.txt); do
                # http://openwall.info/wiki/john/Generating-test-hashes
                printf "user${i}:%s\n" $(mkpasswd -m md5 $p) >> $DIR/target_hashes
                i=$((i+1))
            done
        fi
    fi
    for i in /usr/share/john/*; do
        ln -sf $i /var/lib/john
    done
}

apt-get -qqy install rpcbind nfs-common
share_already_mounted || mount_share
download_passwords
install_target_hashes

Deploying the Service

If you're still with me, we're ready to deploy this service and try cracking some passwords!  We need to bootstrap our environment, and deploy the stock nfs charm.  Next, branch my charm's source code, and deploy it.  I deployed it here across a whopping 18 units!  I currently have a quota of 20 small instances I can run on our private OpenStack.  Two of those instances are used by the Juju bootstrap node and by the NFS server.  So the other 18 will be NFS clients running john processes.

juju bootstrap
juju deploy nfs
bzr branch lp:~kirkland/+junk/john precise
juju deploy -n 18 --repository=precise local:precise/john
juju add-relation john nfs
juju status

Once everything is up and ready, running and functional, my status looks like this:

machines:
  "0":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.230
    instance-id: 98090098-2e08-4326-bc73-22c7c6879b95
    series: precise
  "1":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.7
    instance-id: 449c6c8c-b503-487b-b370-bb9ac7800225
    series: precise
  "2":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.193
    instance-id: 576ffd6f-ddfa-4507-960f-3ac2e11ea669
    series: precise
  "3":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.215
    instance-id: 70bfe985-9e3f-4159-8923-60ab6d9f7d43
    series: precise
  "4":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.221
    instance-id: f48364a9-03c0-496f-9287-0fb294bfaf24
    series: precise
  "5":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.223
    instance-id: 62cc52c4-df7e-448a-81b1-5a3a06af6324
    series: precise
  "6":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.231
    instance-id: f20dee5d-762f-4462-a9ef-96f3c7ab864f
    series: precise
  "7":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.239
    instance-id: 27c6c45d-18cb-4b64-8c6d-b046e6e01f61
    series: precise
  "8":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.240
    instance-id: 63cb9c91-a394-4c23-81bd-c400c8ec4f93
    series: precise
  "9":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.242
    instance-id: b2239923-b642-442d-9008-7d7e725a4c32
    series: precise
  "10":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.249
    instance-id: 90ab019c-a22c-41d3-acd2-d5d7c507c445
    series: precise
  "11":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.252
    instance-id: e7abe8e1-1cdf-4e08-8771-4b816f680048
    series: precise
  "12":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.254
    instance-id: ff2b6ba5-3405-4c80-ae9b-b087bedef882
    series: precise
  "13":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.255
    instance-id: 2b019616-75bc-4227-8b8b-78fd23d6b8fd
    series: precise
  "14":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.1
    instance-id: ecac6e11-c89e-4371-a4c0-5afee41da353
    series: precise
  "15":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.3
    instance-id: 969f3d1c-abfb-4142-8cc6-fc5c45d6cb2c
    series: precise
  "16":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.4
    instance-id: 6bb24a01-d346-4de5-ab0b-03f51271e8bb
    series: precise
  "17":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.5
    instance-id: 924804d6-0893-4e56-aef2-64e089cda1be
    series: precise
  "18":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.11
    instance-id: 5c96faca-c6c0-4be4-903e-a6233325caec
    series: precise
  "19":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.15
    instance-id: 62b48da2-60ea-4c75-b5ed-ffbb2f8982b5
    series: precise
services:
  john:
    charm: local:precise/john-3
    exposed: false
    relations:
      shared-fs:
      - nfs
      workers:
      - john
    units:
      john/0:
        agent-state: started
        agent-version: 1.11.0
        machine: "2"
        public-address: 10.99.60.193
      john/1:
        agent-state: started
        agent-version: 1.11.0
        machine: "3"
        public-address: 10.99.60.215
      john/2:
        agent-state: started
        agent-version: 1.11.0
        machine: "4"
        public-address: 10.99.60.221
      john/3:
        agent-state: started
        agent-version: 1.11.0
        machine: "5"
        public-address: 10.99.60.223
      john/4:
        agent-state: started
        agent-version: 1.11.0
        machine: "6"
        public-address: 10.99.60.231
      john/5:
        agent-state: started
        agent-version: 1.11.0
        machine: "7"
        public-address: 10.99.60.239
      john/6:
        agent-state: started
        agent-version: 1.11.0
        machine: "8"
        public-address: 10.99.60.240
      john/7:
        agent-state: started
        agent-version: 1.11.0
        machine: "9"
        public-address: 10.99.60.242
      john/8:
        agent-state: started
        agent-version: 1.11.0
        machine: "10"
        public-address: 10.99.60.249
      john/9:
        agent-state: started
        agent-version: 1.11.0
        machine: "11"
        public-address: 10.99.60.252
      john/10:
        agent-state: started
        agent-version: 1.11.0
        machine: "12"
        public-address: 10.99.60.254
      john/11:
        agent-state: started
        agent-version: 1.11.0
        machine: "13"
        public-address: 10.99.60.255
      john/12:
        agent-state: started
        agent-version: 1.11.0
        machine: "14"
        public-address: 10.99.61.1
      john/13:
        agent-state: started
        agent-version: 1.11.0
        machine: "15"
        public-address: 10.99.61.3
      john/14:
        agent-state: started
        agent-version: 1.11.0
        machine: "16"
        public-address: 10.99.61.4
      john/15:
        agent-state: started
        agent-version: 1.11.0
        machine: "17"
        public-address: 10.99.61.5
      john/16:
        agent-state: started
        agent-version: 1.11.0
        machine: "18"
        public-address: 10.99.61.11
      john/17:
        agent-state: started
        agent-version: 1.11.0
        machine: "19"
        public-address: 10.99.61.15
  nfs:
    charm: cs:precise/nfs-3
    exposed: false
    relations:
      nfs:
      - john
    units:
      nfs/0:
        agent-state: started
        agent-version: 1.11.0
        machine: "1"
        public-address: 10.99.60.7

Obtaining the Results

And now, let's monitor the results.  To do this, I'll ssh to any of the john worker nodes, move over to the shared NFS directory, and use the john -show command in a watch loop.

juju ssh john/0
sudo su -
cd /var/lib/john
watch john -show target_hashes

And the results...
Every 2.0s: john -show target_hashes

user0:260775
user1:73832100
user2:829171kzh
user3:pf1vd4nb
user4:7788521312229
user5:saksak
user6:rongjun2010
user7:2312010
user8:davied
user9:elektrohobbi

10 password hashes cracked, 0 left

Within a few seconds, this 18-node cluster has cracked all 10 of the randomly chosen passwords from the dictionary.  That's only mildly interesting, as my laptop can do the same in a few minutes, if the passwords are already in the wordlist.  What's far more interesting is randomly generating a password, passing it as a new configuration to our running cluster, and letting it crack that instead.

Modifying the Configuration Target Hash

Let's generate a random password using apg.  We'll then need to hash this and create a string in the form of username:pwhash that john can understand.  Finally, we'll pass this to our cluster using Juju's set action.

passwd=$(apg -a 0 -n 1 -m 6 -x 6)
target=$(printf "user0:%s\n" $(mkpasswd -m md5 $passwd))
juju set john target_hashes="$target"

This was a 6-character password, drawn from a 52-character alphabet (a-z, A-Z), almost certainly not in our dictionary.  52^6 = 19,770,609,664, or about 19 billion letter combinations we need to test.  According to the john -test command, a single one of my instances can test about 12,500 MD5 hashes per second.  So with a single instance, this would take a maximum of 52^6 / 12,500 / 60 / 60 ≈ 439 hours.  Or 18 days :-)  Well, I happen to have exactly 18 instances, so we should be able to test the entire wordspace in about 24 hours.
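
You can sanity-check that arithmetic with bc (integer division, so the results are floored):

echo '52^6' | bc                      # 19770609664 candidate passwords
echo '52^6 / 12500 / 3600' | bc       # ~439 hours for a single instance
echo '52^6 / 12500 / 3600 / 18' | bc  # ~24 hours across 18 instances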

So I threw all 18 instances at this very problem and let it run over a weekend. And voila, we got a little lucky, and cracked the password, Uvneow, in 16 hours!

In Conclusion

I don't know if this charm will ever land in the official charm store.  That really wasn't the goal of this exercise for me.  I simply wanted to bring myself back up to speed on Juju, play with the port to Golang, experiment with OpenStack as a provider for Juju, and most importantly, write a scalable Juju charm.

This particular application, john, is actually just one of a huge class of parallelizable, MPI-compatible applications that could be charmed for Juju.  The general design, I think, should be very reusable by you, if you're interested.  Between the shared file system and the keep-one-running approach, I bet you could charm any one of a number of scalable applications.  While I'm not eligible, perhaps you might consider competing for cash prizes in the Juju Charm Championship.

Happy charming,
:-Dustin

Read more
Mark Baker

Juju, the leading tool for continuous deployment, continuous integration (CI/CD), and cloud-neutral orchestration, now has a refreshed GUI with smoother workflows for integration professionals spinning up many services across clouds like Amazon EC2 and a range of public OpenStack providers. The new GUI speeds up service design – conceptual modelling of service relationships – as well as actual deployment, providing a visual map of the relationships between services.

“The GUI is now a first-class part of the Juju experience,” said Gary Poster, whose team led the work, “with an emphasis on rapid access to the collection of service charms and better visualisation of the deployment in question.” In this milestone the Juju GUI can act as a whiteboard, so a user can mock up the service orchestration they intend to create using the same Juju GUI that they will use to manage their real, live deployments. Users can experience the new interface for themselves at jujucharms.com with no need to set up software in advance.

Juju is used by organisations that are constantly deploying and redeploying collections of services. Companies focused on media, professional services, and systems integration are the heaviest users, who benefit from having repeatable best-practice deployments across a range of cloud environments.

Juju uniquely enables the reuse of shared components called ‘charms’ for common parts of a complex service. A large portfolio of existing open source components is available from a public Charm collection, and browsing that collection is built into the new GUI. Charms are easy to find and review in the GUI, with full documentation instantly accessible. Featured, recommended and popular charms are highlighted for easy discovery. Each Charm now has more detailed information including test results from all supported providers, download count, related Charms, and a Charm code quality rating. The Charm collection includes both certified, supported Charms, and a wider range of ad-hoc Charms that are published by a large community of contributors.

The simple browser-based interface makes it easy to find reusable open source charms that define popular services like Hadoop, Storm, Ceph, OpenStack, MySQL, RabbitMQ, MongoDB, Cassandra, Mediawiki and WordPress. Information about each service, such as configuration options, is immediately available, and the charms can then be dragged and dropped directly on a canvas where they can be connected to other services, deployed and scaled. It’s also possible to export these service topologies into a human-readable and -editable format that can be shared within a team or published as a reference architecture for that deployment.
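
For a flavour of what such an export looks like, here is a hypothetical sketch in the juju-deployer bundle format of the time (the deployment, service, and relation names are purely illustrative):

my-stack:
  services:
    wordpress:
      charm: cs:precise/wordpress
      num_units: 2
    mysql:
      charm: cs:precise/mysql
      num_units: 1
  relations:
  - [wordpress, mysql]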

Recent additions to the public Charm collection include OpenVPN AS, Liferay, Storm and Varnish. For developers the new GUI and Charm Browser mean that their Charms are now much more discoverable. For those taking part in the Charm Championship, it’s easier to upload their Charms and use the GUI to connect them into a full solution for entry into the competition. Submit your best Charmed solution for the possibility of winning $10,000.

The management interface for Charm authors has also been enhanced and is available immediately at http://manage.jujucharms.com/.

See how you can use Juju to deploy OpenStack:

The current version of Juju supports Amazon EC2, HP Cloud and many other OpenStack clouds, as well as in-memory deployment for test and dev scenarios. Juju is on track for a 1.12 release in time for Ubuntu 13.10 that will enhance scalability for very large deployments, and a 2.0 release in time for Ubuntu 14.04 LTS.

See it demoed: We’ll be showing off the new Juju GUI and charm browser at OSCON on Tuesday the 23rd at 9:00AM, in the “Service Orchestration in the Cloud with Juju” workshop.

Read more
Mark Baker

We are pleased to announce a seriously good addition to our product team: Ratnadeep (Deep) Bhattacharjee. Deep joins Canonical as Director of Cloud Product Management from VMware, where he led its Cloud Infrastructure Platform effort and gained a solid understanding of customer needs as they continue to move to virtual and cloud infrastructure.

Ubuntu has fast become the operating system of choice for cloud computing, and it is the most popular platform for OpenStack. With Deep’s direction, we plan to continue to lead Ubuntu OpenStack into enterprises, carriers and service providers looking for new ways to deliver next generation infrastructure without the ‘enterprise’ price tag and lock-in. He will also be key in building out our great integration story with VMware to help customers who will run heterogeneous environments. Welcome, Deep!

Read more