If you have written or know how to write a Quickly template, I’d like to get some help making one for Singlet Lenses and Scopes.
I’ve finally had a little extra time to get back to working on Singlet. There’s been a lot of progress since the first iteration. To start with, Singlet had to be upgraded to work with the new Lens API introduced when Unity 5.0 landed in the Precise repos. Luckily the Singlet API didn’t need to change, so any Singlet lenses written for Oneiric and Unity 4 will only need the latest Singlet to work in Precise.
The more exciting development, though, is that Singlet 0.2 introduces an API for Scopes. This means you can write Lenses that support external scopes from other authors, as well as external Scopes for existing lenses. They don’t both need to be based on Singlet either: you can write a Singlet scope for the Music Lens if you want, and non-Singlet scopes can be written for your Singlet lens. They don’t even have to be in Python.
In order to make the Scope API, I chose to convert my previous LoCo Teams Portal lens into a generic Community lens and separate LoCo Teams scope. The Lens itself ends up being about as simple as can be:
from singlet.lens import Lens, IconViewCategory, ListViewCategory

class CommunityLens(Lens):

    class Meta:
        name = 'community'
        description = 'Ubuntu Community Lens'
        search_hint = 'Search the Ubuntu Community'
        icon = 'community.svg'
        category_order = ['teams', 'news', 'events', 'meetings']

    teams = IconViewCategory("Teams", 'ubuntu-logo')
    news = ListViewCategory("News", 'news-feed')
    events = ListViewCategory("Events", 'calendar')
    meetings = ListViewCategory("Meetings", 'applications-chat')
As you can see, it’s really nothing more than some meta-data and the categories. All the real work happens in the scope:
class LocoTeamsScope(Scope):

    class Meta:
        name = 'locoteams'
        search_hint = 'Search LoCo Teams'
        search_on_blank = True
        lens = 'community'
        categories = ['teams', 'news', 'events', 'meetings']

    def __init__(self, *args, **kargs):
        super(LocoTeamsScope, self).__init__(*args, **kargs)
        self._ltp = locodir.LocoDirectory()
        self.lpusername = None
        import ConfigParser as configparser
        bzrconf = configparser.ConfigParser()
        # read the user's bazaar.conf to find their Launchpad username
        bzrconf.read(os.path.expanduser('~/.bazaar/bazaar.conf'))
        self.lpusername = bzrconf.get('DEFAULT', 'launchpad_username')

    def search(self, search, model, cancellable):
        ...
I left out the actual search code, because it’s rather long and most of it isn’t important when talking about Singlet itself. Just like the Lens API, a Singlet Scope uses an inner Meta class for meta-data. The most important fields here are the ‘lens’ and ‘categories’ variables. The ‘lens’ tells Singlet the name of the lens your scope is for. Singlet uses this to build DBus names and paths, and also to know where to install your scope. The ‘categories’ list will let you define a result item’s category using a descriptive name, rather than an integer.
model.append('http://loco.ubuntu.com/events/%s/%s/detail/'
             % (team['lp_name'], tevent['id']),
             team['mugshot_url'], self.lens.events, "text/html",
             tevent['name'],
             '%s\n%s' % (tevent['date_begin'], tevent['description']),
             '')
It’s important that the order of the categories in the Scope’s Meta matches the order of categories defined in the Lens you are targeting, since in the end it’s still just the position number that’s being passed back to the Dash.
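The mapping is easy to picture: the descriptive name is just a stand-in for its position in the categories list. A minimal sketch of the idea (illustrative, not Singlet's actual implementation):

```python
# How a descriptive category name can resolve to the integer the Dash
# expects: the index is simply the name's position in the categories list.
categories = ['teams', 'news', 'events', 'meetings']

def category_index(name):
    return categories.index(name)

print(category_index('events'))  # 2
```

This is also why the ordering has to agree between Lens and Scope: shuffle the list on one side and 'events' silently becomes a different category number.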
After all this, I still had a little bit of time left in the day. And what good is supporting external scopes if you only have one anyway? So I spent 30 minutes creating another scope, one that will read from the Ubuntu Planet news feed:
The next step is to add some proper packaging to get these into the Ubuntu Software Center, but you impatient users can get them either from their respective bzr branches, or try the preliminary packages from the One Hundred Scopes PPA.
Note that while lenses written for Singlet 0.1 will work in Singlet 0.2 on Precise, the reverse is not necessarily true. Singlet 0.2, as well as lenses and scopes written for it, will not work on Oneiric.
In an effort to increase the exposure of the work being done to improve the Unity desktop, we are moving discussions from the code-named #ayatana channel on freenode to the more discoverable #ubuntu-unity channel (still on freenode). If you want to talk to Unity developers, find out what’s happening, or join the growing ranks of community contributors, this is a good place to start.
By now you should have heard that Canonical is branching out from the desktop and has begun work on getting Ubuntu on TVs. Lost in all the discussion of OEM partnerships and content distribution agreements is a more exciting (from my perspective) topic: Ubuntu TV shows why Unity was the right choice for Canonical to make.
Ubuntu TV doesn’t just look like Unity, it is Unity. A somewhat different configuration, visually, from the desktop version, but fundamentally the same. Unity isn’t just a top panel and side launcher, it is a set of technologies and APIs: Indicators, Lenses, Quick Lists, DBus menus, etc. All of those components will be the same in Ubuntu TV as they are on the desktop, even if their presentation to the user is slightly different. When you see Unity on tablets and phones it will be the same story.
Having the same platform means that Ubuntu offers developers a single development target, whether they are writing an application for the desktop, TVs, tablets or phones. There is only one notifications API, only one search API, only one cloud syncing API. Nobody currently offers that kind of unified development platform across all form factors, not Microsoft, not Google, not Apple.
If you are writing the next Angry Birds or TweetDeck, would you want to target a platform that only exists on one or two form factors, or one that will allow your application to run on all of them without having to be ported or rewritten?
Anybody with multiple devices has found an application for one that isn’t available for another. How many times have we wanted the functionality offered by one of our desktop apps available to us when we’re on the go? How many games do you have on your phone that you’d like to have on your laptop too? With Ubuntu powered devices you will have what you want where you want it. Combine that with Ubuntu One and your data will flow seamlessly between them as well.
None of this would have been possible with Gnome 2. It was a great platform for its time, when there was a clear distinction between computers and other devices. Computers had medium-sized screens, a keyboard and a mouse. They didn’t have touchscreens, and they didn’t change aspect ratio when turned sideways. Devices lacked the ability to install third party applications, they mostly lacked network connectivity, and they had very limited storage and processing capabilities.
But now laptops and desktops have touch screens, phones have multi-core, multi-GHz processors. TVs and automobiles are both getting smarter and gaining more and more of the features of both computers and devices. And everything is connected to the Internet. We need a platform for this post-2010 computing landscape, something that can be equally at home with a touch screen as it is with a mouse, with a 4 inch and a 42 inch display.
Unity is that platform.
Back when I first started writing Unity lenses, I lamented the complexity required to get even the most basic Lens written and running. I wrote in that post about wanting to hide all of that unnecessary complexity. Well now I am happy to announce the first step towards that end: Singlet.
In optics, a “singlet” is a very simple lens. Likewise, the Singlet project aims to produce simple Unity lenses. Singlet targets opportunistic programmers who want to get search results into Unity with the least amount of work. By providing a handful of Python metaclasses and base classes, Singlet lets you write a basic lens with a minimal amount of fuss. It hides all of the boilerplate code necessary to interface with GObject and DBus, leaving the developer free to focus solely on the purpose of their lens. With Singlet, the only thing a Lens author really needs to provide is a single search function.
So what does a Singlet Lens look like? Here is a sample of the most basic lens, which produced the screenshot above:
#! /usr/bin/python
from singlet.lens import SingleScopeLens, IconViewCategory, ListViewCategory
from singlet.utils import run_lens

class TestLens(SingleScopeLens):

    class Meta:
        name = 'test'

    cat1 = IconViewCategory("Cat One", "stock_yet")
    cat2 = ListViewCategory("Cat Two", "hint")

    def search(self, phrase, results):
        results.append('http://google.com/search?q=%s' % phrase,
                       'file', self.cat1, "text/html", phrase, phrase, '')
        results.append('http://google.com/search?q=%s' % phrase,
                       'file', self.cat2, "text/html", phrase, phrase, '')

if __name__ == "__main__":
    import sys
    run_lens(TestLens, sys.argv)
As you can see, there isn’t much to it. SingleScopeLens is the first base class provided by Singlet. It creates an inner scope for you, and connects it to DBus to handle a user’s search events. The three things you need to do as a Lens author are: give your lens a name in the Meta class, define at least one Category, and, most importantly, implement your custom search(self, phrase, results) method.
Django developers will notice a similarity between Singlet Lenses and Django Models in their use of an inner Meta class. In fact, they work exactly the same way, though with different properties. At a minimum, you will need to provide a name for your lens. Everything else can either use default values, or will be extrapolated from the name. Everything in your Meta class, plus defaults and generated values, will be accessible in <your_class>._meta later on.
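To make that concrete, here's a rough sketch of the kind of default-filling that could happen behind the scenes. The derived field names and formats here are my assumptions for illustration, not Singlet's exact rules:

```python
# Hypothetical sketch of extrapolating Meta defaults from the lens name.
# The derived fields and naming patterns are illustrative assumptions.
class _MetaOptions(object):
    def __init__(self, meta):
        self.name = meta['name']  # the one required field
        self.title = meta.get('title', self.name.title())
        self.description = meta.get('description', '%s Lens' % self.title)
        self.bus_name = meta.get('bus_name',
                                 'unity.singlet.lens.%s' % self.name)

opts = _MetaOptions({'name': 'test'})
print(opts.title)     # Test
print(opts.bus_name)  # unity.singlet.lens.test
```

Anything you do set in Meta simply overrides the generated value, which is why a one-line Meta with just a name is enough to get started.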
Again borrowing from Django Models, you add categories to your Singlet Lens by defining them in the class’s scope itself, rather than in the __init__ method. One thing that I didn’t like about Categories when writing my previous lenses was that I couldn’t reference them when adding search results to the result model; instead, you had to give the numeric index of the category. In Singlet, the variable name you used when defining the category is converted to the numeric index for that category, so you can easily reference it again when building your search results. But don’t worry, your category objects are still available to you in <your_class>._meta.categories if you want them.
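The name-to-index trick can be sketched in a few lines. The Category class and decorator here are illustrative stand-ins for what Singlet does internally, not its actual code:

```python
# Stand-in sketch: class-level Category objects are swapped for their
# position number, so ExampleLens.cat1 can be used directly as the index.
class Category(object):
    def __init__(self, title, icon):
        self.title, self.icon = title, icon

def index_categories(cls):
    cats = [(name, value) for name, value in vars(cls).items()
            if isinstance(value, Category)]
    cls._meta_categories = [value for name, value in cats]
    for i, (name, value) in enumerate(cats):
        setattr(cls, name, i)  # the name now resolves to the integer index
    return cls

@index_categories
class ExampleLens(object):
    cat1 = Category("Cat One", "stock_yet")
    cat2 = Category("Cat Two", "hint")

print(ExampleLens.cat1, ExampleLens.cat2)  # 0 1
```

The original Category objects are kept on the class (here in _meta_categories), which mirrors how Singlet still exposes them via <your_class>._meta.categories.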
The core functionality of a Lens is the search. So it makes sense that the majority of your work should happen here. Singlet will call your search method, passing in the current search phrase and an empty results model. From there, it’s up to you to collect data from whatever source you are targeting, and start populating that results model.
Unity knows how to handle common URIs in your results, such as file:// and http:// URIs. But often your lens isn’t going to be dealing with results that map directly to a file or website. For those cases, you need to hook into DBus again to handle the URI of a selected result item, and return a specifically constructed GObject response. With Singlet, all you need to do is define a handle_uri method on your Lens, and it will take care of hooking it into DBus for you. Singlet also provides a couple of helper methods for your return value: either hide_dash_response to hide the dash after you’ve handled the URI, or update_dash_response if you want to leave it open.
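As a rough sketch of the shape such a handler takes, here the base class and the two response helpers are stubbed out so the dispatch logic stands alone; only the handle_uri, hide_dash_response and update_dash_response names come from Singlet, while the copy:// scheme and the response dictionaries are illustrative:

```python
# Stubbed sketch of the handle_uri hook; the URI scheme ('copy://') and
# the response dictionaries are illustrative, not Singlet's real values.
class SketchLens(object):
    def hide_dash_response(self, uri=''):
        return {'uri': uri, 'hide_dash': True}

    def update_dash_response(self, uri=''):
        return {'uri': uri, 'hide_dash': False}

    def handle_uri(self, scope, uri):
        scheme, _, word = uri.partition('://')
        if scheme == 'copy':
            # e.g. put the word on the clipboard, then close the dash
            return self.hide_dash_response(uri)
        # anything else: handle it but leave the dash open
        return self.update_dash_response(uri)

response = SketchLens().handle_uri(None, 'copy://ubuntu')
print(response['hide_dash'])  # True
```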
Once you’ve defined your lens, you need to be able to initialize it and run it, again using a combination of DBus and GObject. Singlet hides all of this behind the run_lens function in singlet.utils, which you should call at the bottom of your lens file as shown in the above snippet.
There’s more to getting your Lens working than just the code: you also need a .lens file describing your lens to Unity, and a .service file telling DBus about it. Singlet helps you out here too, by providing command line helpers for generating and installing these files. Suppose the code snippet above was in a file called testlens.py. Once it’s written, you can run “python testlens.py make” and it will write test.lens and unity-test-lens.service into your current directory. The data in these files comes from <your_lens>._meta, including the name, DBus information, description and icon. After running make, you can run “sudo python testlens.py install”, which will copy your code and config files to where they need to be for Unity to pick them up and run them.
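For reference, a .lens file is just a small key-file. This is roughly the shape of what “make” writes out for the test lens; the exact DBus names and paths are my guesses at what Singlet derives from the name, so treat the values as placeholders:

```ini
[Lens]
DBusName=unity.singlet.lens.test
DBusPath=/unity/singlet/lens/test
Name=Test
Icon=test.svg
Description=
SearchHint=
```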
You can get the current Singlet code by branching lp:singlet. I will be working on getting it built and available via PyPI and a PPA in the near future, but for now just having it on your PYTHONPATH is enough to get started using it. Just be aware that if you make/install a Singlet lens, you need to make the Singlet package available on Unity’s PYTHONPATH as well or it won’t be able to run. I’ve already converted my Dictionary Lens to use Singlet, and will work on others while I grow the collection of Singlet base classes. If anybody has a common yet simple use case they would like me to target, please leave a description in the comments.
Unity certainly has been getting a lot of attention in the past year. Love it or hate it, everybody seems to have something to say, whether it’s about the Launcher, application indicators, or the window control buttons being on the left. But with all the talk about Unity, good and bad, one unique aspect that hasn’t been getting nearly enough attention is Lenses.
Lenses are a central part of the Unity desktop, and anybody who’s used it will be familiar with the default Application and File lenses, maybe even the Music lens. But there’s so much more to this technology than you might think. In fact, David Callé has recently been spearheading an effort to build out a large number of small but incredibly useful lenses. I first took notice of David’s work when he released his Book lens which, being a huge ebook fan, really brought home the usefulness of Unity Lenses for me.
More recently, David has been writing Scopes for the One Hundred Scopes project. A Scope is the program that feeds results into a Lens. While the Lens defines the categories and filters, it’s the Scopes that do the heavy lifting of finding and organizing the data that will ultimately be displayed on your Dash. If you follow David on Google+, chances are you’ve seen him posting screenshots of one scope after another as he writes them, often several a day.
Seeing how quickly he was able to write these, I decided to dive in and try it out myself. You can write Lenses in a variety of languages, including Python, my language of choice. I decided to start off with something relatively easy, and something that I’ve personally been missing for a while. I used to use a Gnome2 applet called Deskbar, which let you type in a short search word or phrase, and it presented you with search results and various other options. Included among those was the option to look up the word in gnome-dictionary, and I used this option on a startlingly frequent basis. Unfortunately Deskbar fell out of favor and development even before the switch to Gnome 3, and I’d been lacking a quick way to look up words ever since. So I decided that the Lens I wanted was one that would replace this missing functionality: a Dictionary lens.
My first task was to find out how to write a lens. I checked the Ubuntu Wiki and the Unity portal, both of which offered a lot of technical information about writing lenses, but unfortunately not very much that I found helpful for someone just starting off. In fact, I had to get a rather large amount of one-on-one help from David Callé before I could even get the most basic functionality working.
Lenses and Scopes all communicate with each other and with the Unity Dash via DBus, and for anybody not familiar with DBus this makes for a very steep learning curve. On top of that, writing it in Python means you’ll be relying on GObject Introspection (GI), which is a very nice way of making APIs written in one language automatically available to another, but it also means you’re going to be using the lowest common denominator when it comes to language features. I found that learning to work with these two technologies accounted for 90% or more of the time it took me to write my Dictionary lens. Before I write another Lens or Scope, I plan on wrapping much of the DBus and GI boilerplate and wiring behind a simple, reusable set of Python classes. I hope this will help developers, both newbies and seasoned Unity hackers, in writing simple Scopes by allowing them to focus 90% of their time on writing the code that does the actual searching.
But by the end of the day I had a working Dictionary lens. It uses your local spellcheck dictionary, via python-enchant, to both confirm whether or not the word you typed in is spelled correctly, as well as offer a list of suggested alternatives. I also dug through the gnome-dictionary code and found that it was pulling its definitions from the dict.org online database using an open protocol. Using the python-dictclient I was able to query the same database, and include the start of a word’s definition in the Lens itself.
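The check-and-suggest flow of that spellcheck step looks roughly like this. The real lens uses python-enchant, but a stdlib stand-in built on difflib shows the same shape, and the word list here is purely illustrative:

```python
# Stand-in for the enchant check()/suggest() calls the lens actually uses,
# built on difflib so it runs anywhere; WORDS is an illustrative sample.
import difflib

WORDS = ['lens', 'scope', 'search', 'dictionary', 'unity']

def check(word):
    """Is the word spelled correctly (i.e. in our word list)?"""
    return word in WORDS

def suggest(word):
    """Offer close alternatives for a misspelled word."""
    return difflib.get_close_matches(word, WORDS, n=3)

print(check('lens'))     # True
print(suggest('serch'))  # ['search']
```

With enchant, check() and suggest() would instead come from a Dict object backed by your locale's installed dictionary.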
This lens turned out to be more complex than I had originally envisioned, not just for the features listed above, but also because I needed to override what happened when you clicked on an item. When you build up your results, you have to give each item a unique URI, which is often in the form of an http:// or file:// URL that Gnome knows how to handle. But for results that were just words, I needed to do the handling myself, which meant more DBus wiring to have the mouse click event call a local function. From there I was able to copy the word to the Gnome clipboard or launch gnome-dictionary for viewing the full definition.
After seeing that first Lens running in my Dash, I felt an urge to try another. Since I already had all of the DBus and GI code, I wouldn’t have to mess with all of that and I could focus just on the functionality I wanted. Jorge Castro has been trying to get me to write a lens for the LoCo Teams Portal (formerly LoCo Directory) since the concept was first introduced to Unity under the name “Places”. Since LTP already offers a REST/JSON API, this turned out to be remarkably simple to do. Between the existing DBus/GI code copied from my previous lens, and an existing Python client library for the LTP, I was able to get a working lens in only a couple of hours.
For item icons, you can use either a local icon, or a URL to a remote image. For this lens, I used the team’s mugshot URL that LTP pulls from the team’s Launchpad information. When you search, it’ll show matching Teams, as well as Events and Meetings for those teams, and any Event or Meeting that also matches your search criteria. I’ve also added the ability for this lens to lookup your Launchpad username (by checking for it in your bazaar.conf) and defaulting it to display the LoCo Teams you are a member of, as well as their upcoming Events and Meetings.
Both the Dictionary Lens and the LTP Lens lump the Lens and Scope code into the same Python class, but it doesn’t have to be this way. You can write a Scope for someone else’s Lens, and vice versa. In fact, I plan on separating the LTP lens into a general purpose Community Lens, with an LTP Scope feeding it results about LoCo Teams. From there, others can write scopes pulling in other community information to be made available on the Dash. This will also be my prototype for a Python-friendly wrapper around all of the DBus and GI work that scope writers probably don’t need to know about anyway.
As promised in my previous post about my Unity Phone Mockups, here’s a look at the TV mockups I’ve been playing with over the last week, along with the reasons behind some of the designs. Links to the source files for these mockups, as well as the mockups by other community contributors, can be found on this wiki page.
These aren’t as innovative as my phone mockups, partly because I didn’t have much of a reference to start from like I did with iOS and Android for phones. I have a DVR from my cable company, but no HTPC or other media center software.
The TV frame and choice of movie (Big Buck Bunny) came from Alan Bell’s template. I added a remote control to that, and added buttons as I found a need for them. In the mockups, I’ve highlighted the buttons you would use to interact with the given screen.
The basic concept was to overlay the Unity components when the user presses the Circle-of-Friends button on the remote, but to otherwise leave whatever they are watching full screen. On this mockup I used applications in the Launcher, and pressing the CoF would put the focus on them the same way Alt-F1 does on the desktop. You could then navigate through them using the arrow buttons on the remote, pressing OK to make your selection.
In this alternate mockup, I replaced the applications with Unity lenses, which I think makes more sense for a TV, since you will more often than not be interested in finding content, not selecting which application to run. Luckily Unity already supports both, so this is just a matter of what the default initial configuration should be; after that, the user can put whatever they want in the Launcher.
For opening the dash I envision being able to double-tap the CoF button (or selecting a Lens). In fact, I would like to have this on desktop Unity, since I want quick access to the Launcher more often than I want the Dash. Once the Dash is open, you would use the remote’s arrow buttons to navigate through the items listed, again pressing OK to make your selection.
For accessing the Dash’s filters, I added the “context menu” button to the remote which, when pressed while on the Dash screen, will open the filter sidebar, letting you navigate through the filtering options using the remote’s arrows. Pressing the context menu button again would collapse the filter sidebar and return arrow navigation to the results table.
Likewise I added a Search button to the remote to send focus to the Dash’s search field and activate the on-screen keyboard. Text input from a remote like this will be cumbersome (unless we come up with a monstrosity of a remote like this), so we’ll want to avoid the need for this as much as possible. But since the Dash is highly search-focused, I felt that there needed to be a mockup for this aspect.
This is a screenshot of Shotwell taken from my laptop and scaled to fit a resolution it might be at on a TV screen. I did this to demonstrate how an existing desktop app might look and work without modification on a 10 foot interface. Here again I added a new button to the remote, which is a kind of next/last panel or, more accurately, the tab key on a desktop keyboard. This lets you navigate through widgets and panels on a traditional application like you can using the tab key on a desktop.
I think with some small enhancements to GTK and Qt, we can allow application developers to make navigating apps this way easier. Alternately, a new library similar to uTouch could act as an interface between remote control input and applications.
Someone pointed me to YouTube’s LeanBack interface, an HTML5 webapp designed for 10 foot displays. This is a very promising way to deliver applications and contents to internet connected TVs, especially if we ship with a fully functional, HTML5 supporting web browser optimized for control with a remote.
Finally I wanted to show off Media Explorer, a media center application built on the same Gnome technologies as the rest of the Ubuntu desktop.
Just like I did in my last post, I’d like to solicit discussion and feedback on these mockups, and also ask what other scenarios/mockups you would like to see. You can either leave comments here, or join the live conversation in #ubuntu-tv on Freenode.
It’s late, I’m tired, so this is going to be brief. But if I didn’t put something up now, chances are I’d procrastinate to the point where it didn’t matter anymore, so something is better than nothing.
So the buzz all week was about Juju and Charms. It’s a very cool technology that I think is really going to highlight the potential of cloud computing. Until now I always had people comparing the cloud to virtual machines, telling me they already automate deploying VMs, but with Juju you don’t think about machines anymore, virtual or otherwise. It’s all about services, which is really what you want: a service that is doing something for you. You don’t need to care where, or on what, or in combination with some other thing; Juju handles all that automatically. It’s really neat, and I’m looking forward to using it more.
Summit worked this week. In fact, this is the first time in my memory where there wasn’t a problem with the code during UDS. And that’s not because we left it alone either. IS actually moved the entire site to a new server the day before UDS started. We landed several fixes during the week to fix minor inconveniences experienced by IS or the admins. And that’s not even taking into consideration all the last-minute features that were added by our Linaro developers the week prior. But through it all, Summit kept working. That, more than anything else, is a testament to the work the Summit developers put in over the last cycle to improve the code quality and development processes, and I am very, very proud of that. But we’re not taking a break this cycle. In fact, we had two separate sessions this week about ways to improve the user experience, and will be joined by some professional designers to help us towards that goal.
Ubuntu One eBook syncing
So what started off as a casual question to Stuart Langridge turned into a full-blown session about how to sync ebook data using Ubuntu One. We brainstormed several options of what we can sync, including reading position, bookmarks, highlights and notes, as well as ways to sync them in an application-agnostic manner. I missed the session on the upcoming Ubuntu One Database (U1DB), but we settled on that being the ideal way of handling this project, and that this project was an ideal test case for the U1DB. For reasons I still can’t explain, I volunteered to develop this functionality at some point during the next cycle. It’s certainly going to be a learning experience.
Friends! It sure was good to catch up with all of you. Both friends from far-away lands, and those closer to home. Even though we chat on IRC almost constantly, there’s still nothing quite like being face to face. I greatly enjoyed working in the same room with the Canonical ISD team, which has some of the smartest people I’ve ever had the pleasure of working with. It was also wonderful to catch up with all my friends from the community. I don’t know of any other product or project that brings people together the way Ubuntu does, and I’m amazed and overjoyed that I get to be a part of it.
If you’ve been doing anything with Ubuntu lately, chances are you’ve been hearing a lot of buzz about Juju. If you’re attending UDS, then there’s also a good chance that you’ve been to one or more sessions about Juju. But do you know it?
The building blocks for Juju are its “charms”, which detail exactly how to deploy and configure services in the Cloud. Writing charms is how you harness the awesome power of Juju. Tomorrow (Friday) there will be a 2 hour session all about writing charms, everything from what they do and how they work, to helping you get started writing your own. Questions will be answered, minds will be inspired, things will be made, so don’t miss out.
(Photo courtesy of http://www.flickr.com/photos/slightlynorth/3977607387/)
Are you both an Ubuntu user and a bibliophile? Want to keep your ebooks synced between all your connected devices, including bookmarks and reading position? If so, join us for this UDS session Thursday, Nov 3rd, where we’ll be talking about how to add that functionality to Ubuntu One.
In a previous post a commentator was explaining his typical web-stack deployment, and boasting about how he “can roll this out on Debian in less than 4 hours”. Now he’s talking about provisioning, installing the OS, installing the services, and configuring everything. That’s easily a day’s work for a capable sysadmin in a corporate environment. At least, it is in my experience. So 4 hours sounds pretty good, doesn’t it? I tell you what, go find your nearest admin and ask them how long it would take for them to get a new WordPress site up and ready for you to start posting to. No really, I mean it, go ask. I’ll wait.
Back? Good. I’m betting that, for many of you, the answer was a day or less, depending on their backlog of work. A lucky few would be able to get their new site in an hour or two. Not bad. But if you were using Ubuntu and cloud technology, you would have had your new site ready in the time that it took you to get that answer.
You think I’m exaggerating? You can get a new world-accessible WordPress site up and running in less than 10 minutes. That’s not marketing hype; Canonical is quite literally putting its money where its mouth is, by paying for an hour of Ubuntu Server on Amazon’s cloud service. Go to http://try.cloud.ubuntu.com and follow along, and you’ll have a WordPress site up and running in less time than it’ll take you to finish reading this article.
Welcome to the Try Ubuntu project. Yes, we really are footing the bill for this, it won’t cost you a dime. Click that large inviting “Let’s go to the cloud” button to start your adventure in cloud computing.
You will need to sign into Ubuntu SSO, because while we’re perfectly happy to pay for one hour, we do need to prevent abuse. Requiring a login lets us limit this to once per user. If you don’t have an SSO account you can create one now; really, if you’re going to spend any time at all with Ubuntu or the community, you’re going to want one sooner or later anyway.
Ubuntu SSO lets you decide what detailed information to send back to any requesting web service. You don’t have to send any of these details back to use this service, but if you have a Launchpad profile and uploaded SSH keys, you’ll get a better experience if you send at least your username.
Next you’ll get to choose what you want running on your trial instance. You’ll also have to agree not to be abusive with your instance, do anything illegal, or generally cause other people problems as a result of our generosity. Seriously, just don’t do it.
Currently we offer a base Ubuntu Server, running just the default installation, as well as the base server plus WordPress, Drupal or MoinMoin. These aren’t pre-made images, they’ll be installed after the new server is provisioned, just like you would do manually, only thanks to cloud-init, we have it all automated. You will use the same Ubuntu Server AMI regardless of which service you choose.
That’s all you need to do! Now click the “Launch” button and the website will ask for a new m1.small instance through Amazon’s EC2 API. This API is available to anyone, by the way, so you can script your own cloud deployments in exactly the same way. We are using the boto python library to access the API from within Django.
After we get an instance reserved, we have to wait for Amazon to start it, which only takes about a minute. In the meantime, this page will periodically refresh itself, and will let you know once your instance has started. Did you notice that “View Cloud-Config” link?
This is what will be run on your new instance as soon as it’s ready. You can copy this script, and use it later to start up your own permanent Ubuntu Server instances on AWS or any other EC2 compatible cloud host. It is this script that will install Apache, MySQL and WordPress, configure them all properly, and get them all running.
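For a sense of what such a script looks like, here is a hedged sketch of a minimal cloud-config. The package list and commands are illustrative, not the actual Try Ubuntu script:

```yaml
#cloud-config
# Illustrative sketch: install a LAMP + WordPress stack on first boot.
# Package names are Ubuntu's; the runcmd entry is a placeholder for
# whatever post-install wiring the real script performs.
packages:
  - apache2
  - mysql-server
  - wordpress
runcmd:
  - service apache2 restart
```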
Now your instance has started, and so has your countdown clock. At this point Ubuntu is fully booted, and your cloud-init script is busy installing all the packages and dependencies needed to run WordPress; all you have to do is sit back and wait. Deploying is hard work, huh? Don’t worry, I won’t tell your boss.
Hope you didn’t get too comfy, because 3 minutes later we’re done! Here you’re given the command to SSH into your new instance. If you’re running this from Windows, you’ll need to get something like PuTTY, because last time I checked Redmond still thinks that unencrypted Telnet is a good idea. Trust me, you want SSH.
So fire up a terminal (yes, a terminal, you’re in server-land now buddy) and copy/paste the ssh command to get connected to your new instance.
Next you will be prompted for your password; it’s back on the web page, so just copy/paste it into the terminal. Remember how I said earlier that having a Launchpad profile with uploaded SSH keys would give a better experience? Well if you’ve got all that, then you won’t see this step. You see, part of the initialization that happened when your new instance started was to download your public SSH keys from Launchpad, allowing you to use your private SSH keys for authentication. Nice huh?
So now you’re connected to your new instance. But what’s all that stuff at the bottom of your terminal now? That, my friend, is Byobu, Ubuntu’s highly customized profile for GNU Screen. Describing all of its wonderful goodness would take another full blog post, so I’ll just point out the highlights. F2 creates a new “tab”, and you can switch between them with F3/F4. Down at the bottom are some system monitoring widgets for things like load and memory usage. In green characters is a special widget for Amazon EC2 that gives you an estimate of how much your instance is costing you (a whopping $0.09 USD at this point), and in blue characters is a clock showing how long your trial has been running (so you’ll know how much of your 55 minutes you have used).
Alright, your instance is running and configured, now what? Well, did you see that link on the web page?
That one, next to “Try going to”. Click on it.
Well look at that, it’s your new WordPress site, just waiting for you to give it a name (and also username and password). No unzipping, copying, apache configs, database setup, nothing. We literally can’t make it any easier.
So pick a name, give it a password and email, and you’re all set! Yes, I know my password was weak. Actually my password was ‘password’. Hey, it was only up for 55 minutes, I’m not going to spend extra time thinking of a secure password. Come on, we’ve got a WordPress site to play with!
There you are, your new WordPress site is deployed. And how are we doing on time? Well if you hadn’t spent so much time reading along with this article, you’d have at least 45 minutes left out of the original 55. Heck, I wasted a bunch of time taking screenshots along the way, and I still had more than 40 remaining. How far do you think your sysadmin would have gotten in this same amount of time? He probably just got back from refilling his coffee (which, to his credit, really is necessary before attempting a deployment the old fashioned way).
After a while, as you get close to the end of your trial period, you’ll get these helpful messages in your terminal session, letting you know how much longer you have. And before you start thinking that you can use your mighty sudo powers to stop your instance’s termination, sorry pal, but we keep track of them on our end too, and your instance will be killed through the same EC2 API that launched it. But I sure hope you had fun.
So now you’ve seen how fast and easy it is to deploy not just Ubuntu Server in the cloud, but actual, useful services running on top of it. We offer you three popular software packages for websites, but those are only the tip of the iceberg. You can write cloud-init scripts for anything you want to deploy on Ubuntu, even your own in-house applications. Then you too can deploy into the cloud with the click of a button.
What’s that you say? You don’t have a handy dandy webapp for one-click deployments into the cloud? Oh but you do! You see, everything you just saw is open source, you can download it from lp:awstrial on Launchpad. Use it to run your own trials, or just to learn how we did it so you can write your own internal provisioning service. It defaults to using Amazon’s EC2 cloud, but you can point it to any EC2 compatible cloud. We ran it internally against an OpenStack EC2 cloud during development and testing.
Did you enjoy your trial? Leave us some feedback on what you liked, what you didn’t, and what you want us to offer in the future.
Summit is the code that runs the session scheduler for the Ubuntu Developer Summit (UDS) and, as of last cycle, the Linaro Summit as well. Summit has had a rather troubled past, being passed from one maintainer to another, evolving organically as it went. But during UDS-N it started gaining a team of community contributors, specifically Chris Johnston and me. This expanded further for UDS-O, when Nigel Babu took the helm as the project manager. We were also joined by Linaro developers who wanted to make Summit support two simultaneous events, using the same schedule, the same rooms and the same attendees.
Many changes were made in the run-up to UDS-O, and by “run-up” I mean all the way up to the first day of sessions. Unfortunately, nowhere along Summit’s organic growth did it gain the proper test suite and deployment processes that are a necessity for a project of this size. In fact, one of the bugs that was discovered during UDS-O was a script running on the server that wasn’t even part of Summit’s revision control tree!
Well, this part of Summit’s history is coming to an end. After UDS-O, the community developers started to plan how to stabilize Summit: its code base, by adding testing, and its deployment process, by strictly managing how new code gets into production.
The bug fixing started early this cycle. Nigel was submitting merge proposals by the end of the week of UDS-O, and Chris and I were pair-programming on the flight from JFK back to Orlando. So far there have been 30 branch merges into the summit tree and fixes for 20 bug reports. Nigel gives the full list over at his blog.
Setup and Development
Summit can now be easily set up for development using Virtualenv, which makes getting started with development significantly easier. LoCo Directory recently gained a script that fully automates the setup of a development environment, and this will soon be coming to the Summit code. As I write this post, Jorge Castro has even begun work on an Ensemble formula that will make deploying a fully configured instance of Summit on Amazon’s EC2 platform a matter of a few simple commands.
Making development setup easier lowers the barrier to new contributors, and we hope this will encourage more community members to get involved in such a fun and important project. Making sure we’re all using the same development environment, and having it easily replicated for others to develop and test, will help improve the accessibility and stability of our code.
During UDS-O we got some help setting up and writing the very first testing code for Summit. From now on, writing test cases for new features or bug fixes will become a normal part of our development process. We recently held an online classroom session about how to write test code for Summit (and LoCo Directory too). There is still a lot of Summit code that needs tests written for it, but we’re going to cover as much of that as we can while continuing to move forward with development. More than any other change this cycle, I’m excited about the huge improvements to stability that we can gain through aggressively testing our code.
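Django’s test framework builds on Python’s standard unittest module, so a Summit test looks roughly like this (illustrative names only, not Summit’s actual modules — the real tests subclass django.test.TestCase, which adds a test client and per-test database handling on top of what is shown here):

```python
import unittest

class ScheduleSlotTest(unittest.TestCase):
    """Illustrative sketch; Summit's real tests use django.test.TestCase."""

    def test_slot_length(self):
        # Hypothetical check: a one-hour UDS slot minus a five-minute
        # changeover between sessions leaves 55 minutes of meeting time.
        slot_minutes, break_minutes = 60, 5
        self.assertEqual(slot_minutes - break_minutes, 55)
```

Tests in this style are picked up automatically by the Django test runner, so running the whole suite before a deployment becomes a single command.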
Branch based deployments
Summit has always used branch-based deployments; that is, our production server runs from a copy of our bzr tree instead of an installed package. Unfortunately, until last week the only branch we really had was trunk, which made it hard to properly track emergency fixes when we already had revisions committed to trunk that weren’t ready to be deployed. To fix this we’ve split off a production branch, which is the only branch we will deploy from and will always contain a copy of the exact code that is running in production.
We will also, for the short term, have two branches for development. The 1.x branch is our “stable” tree, that’s where we will make any changes that will be ready to deploy in the coming days or weeks. This means that we can use our trunk branch for long-term development, where we can perform some much-needed refactoring and code cleanup, without worrying about blocking deployments while these changes settle into place. There are some major and necessary changes coming to parts of the Summit code, and this development setup will let us start landing those quickly so that we can test them and build off them, without destabilizing the currently used code tree or blocking minor fixes from being deployed.
Ubuntu Website integration
If you visit the Summit website today, you’ll already see some of our recent changes. To better integrate with the WordPress instance running uds.ubuntu.com, we have changed our main navigation and 960px width to match. Once the WordPress theme updates are rolled out, both sites will have the new community top navigation bar too. No longer will it feel like you’re being thrown from one site to another without a means of getting back. This should lead to a less confusing user experience for both sites, and much happier UDS attendees all around.
This past weekend was Ubuntu Global Jam, where Ubuntu users and contributors all over the world get together to work on improving the project. Jams come in many forms: code hacking, bug triaging, translating, documenting, or even just promoting Ubuntu in their community. In my own corner of the Ubuntu community, a few of us got together to work on improving the Summit project.
This is the code behind http://summit.ubuntu.com, which provides the UDS scheduler and sponsorship application forms. Summit is a Django application, released under the AGPLv3 license, and is primarily developed by community members. Joining me were Chris Johnston, a frequent community contributor who I’ve also worked with on LoCo Directory and other projects, and Elliot Murphy, my 3rd-level boss at Canonical (no pressure there!).
Here’s a list of what we managed to accomplish:
Switched to the new ubuntu-community-webthemes, which will give us the “mothership” top-navigation links as seen on planet.ubuntu.com and wiki.ubuntu.com.
Started work on integrating Summit with Django testing framework.
Bug #643012: Register Interest should only show currently available tracks
Currently when you register your interest in a track, the form shows tracks for previous summits. This will restrict it to just the tracks for the summit you’re registering for.
Bug #668532: /today page to display current day’s schedule
A new, permanent URL which will show the current day’s schedule, so you can bookmark it once and re-use it for each day of the summit, and even future summits!
Bug #745378: Empty sub-nav exists on sponsorship page
Removes the gray sub-navigation bar from pages where there aren’t any links in it.
Bug #462793: Add slots for videographers
Up to two videographers can now be assigned to a UDS session, and their names will appear on the schedule.
Bug #747296: Add plenary flag to iCal feed for conventionist.com
We have been working with the makers of Conventionist, a convention management application, which will allow you to track your session schedule on your Android or iPhone, even getting directions to the correct room. This fix was necessary for them to distinguish plenary sessions from regular ones.
Bug #747301: Add daily Crew list
Lets Summit track which UDS attendees are willing to act as event crew, with the current day’s crew assignments listed on the daily schedule that is displayed on the large monitors during the event.
Bug #747303: Auto-add slots to schedule
This solves an administrative headache for those organizing the summit. For past events, every available time slot had to be entered manually, which was a very time-consuming task. This provides them a quick way to pre-populate the time slots, with the ability to fine-tune just the ones that need it.
Bug #747419: Fix login redirect
Several features of Summit require that you log in using your SSO/Launchpad account. However, after login you are currently redirected back to the main Summit page instead of the page you left. This sends your current page URL as the path to redirect to after a successful login, so you no longer have to go find that page again.
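The redirect fix boils down to passing the current path along to the login view and honoring it after authentication succeeds. A simplified sketch follows (hypothetical names and login URL, not Summit’s exact code; it uses Python 3’s urllib, while Summit itself is Python 2-era Django):

```python
from urllib.parse import urlencode

def login_url(current_path, login_path='/openid/login/'):
    """Build a login link that remembers which page the visitor came from."""
    return login_path + '?' + urlencode({'next': current_path})

def redirect_target(query_params, default='/'):
    """After a successful login, send the user back to the page they left."""
    return query_params.get('next', default)
```

For example, `login_url('/today')` produces `/openid/login/?next=%2Ftoday`, and on success the login view redirects to `redirect_target(request.GET)` instead of the front page.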
© 2010 Canonical Ltd. Ubuntu and Canonical are registered trademarks of Canonical Ltd.