Canonical Voices

Posts tagged with 'ubuntu'

Nicholas Skaggs

Virtual Hugs of appreciation!

Because I was asleep at the wheel (err, keyboard) yesterday, I failed to express my appreciation for some folks. It's a day for hugging! And I missed it!

I gave everyone a shoutout on social media, but since planet looks best overrun with thank you posts, I shall blog it as well!

Thank you to:

David Planella for being the rock that has anchored the team.
Leo Arias for being super awesome and making testing what it is today on all the core apps.
Carla Sella for working tirelessly on many, many different things in the years I've known her. She never gives up (even when I've tried to!), and has many successes to her name for that reason.
Nekhelesh Ramananthan for always being willing to let the clock app be the guinea pig.
Elfy, for rocking the manual tests project. Seriously awesome work. Every time you use the tracker, just know Elfy has been a part of making that testcase happen.
Jean-Baptiste Lallement and Martin Pitt for making some of my many wishes come true over the years with quality community efforts. Autopkgtest is but one of these.

And many more. Plus some I've forgotten. I can't give hugs to everyone, but I'm willing to try!

To everyone in the Ubuntu community, thanks for making Ubuntu the wonderful community it is!

Read more
David Henningsson

PulseAudio buffers and protocol

This is a technical post about PulseAudio internals and the protocol improvements in the upcoming PulseAudio 6.0 release.

PulseAudio memory copies and buffering

PulseAudio is said to have a “zero-copy” architecture. So let’s look at what copies and buffers are involved in a typical playback scenario.

Client side

When the PulseAudio server and client run as the same user, PulseAudio enables shared memory (SHM) for audio data. (In other cases, SHM is disabled for security reasons.) Applications can use pa_stream_begin_write to get a pointer directly into the SHM buffer. When using pa_stream_write or going through the ALSA plugin, there will be one memory copy into the SHM buffer.

Server resampling and remapping

On the server side, the server might need to convert the stream into a format that fits the hardware (and potentially other streams that might be running simultaneously). This step is skipped if deemed unnecessary.

First, the samples are converted to either signed 16-bit or float 32-bit (mainly depending on resampler requirements). Then, if resampling is necessary, we make use of external resampler libraries, the default being Speex. Next, if remapping is necessary, e.g. if the input is mono and the output is stereo, that is performed as well. Finally, the samples are converted to a format that the hardware supports.

So, in the worst case, there might be up to four different buffers involved here (first: after converting to the “work format”, second: after resampling, third: after remapping, fourth: after converting to a hardware-supported format), and in the best case, this step is skipped entirely.
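
To make the buffer chain concrete, here is a small toy sketch in Python with NumPy. This is my own illustration, not PulseAudio code: each step produces a fresh buffer, mirroring the worst case described above, with the resampling step left as a comment.

    # Toy sketch of the per-stream conversion pipeline (not PulseAudio's code):
    # s16 mono client samples -> float32 "work format" -> stereo remap -> s16 output.
    # Each step allocates a new buffer, which is why the worst case needs up to four.
    import numpy as np

    s16_mono = np.array([0, 16384, -16384, 32767], dtype=np.int16)   # client samples

    work = s16_mono.astype(np.float32) / 32768.0                     # 1) to work format
    # (an external resampler such as speex would run here, producing another buffer)
    stereo = np.repeat(work[:, None], 2, axis=1)                     # 2) remap mono -> stereo
    hw = np.clip(stereo * 32768.0, -32768, 32767).astype(np.int16)   # 3) to hardware format

    print(hw)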

Mixing and hardware output

PulseAudio’s built-in mixer multiplies each channel of each stream with a volume factor and writes the result to the hardware. In case the hardware supports mmap (memory mapping), we write the mix result directly into the DMA buffers.

Summary

The best we can do is one copy in total, from the SHM buffer directly into the DMA hardware buffer. I hope this clears up any confusion about what PulseAudio’s advertised “zero copy” capability means in practice.

However, memory copies are not the only thing you want to avoid to get good performance, which brings us to the next point:

Protocol improvements in 6.0

PulseAudio does pretty well CPU-wise for high-latency loads (e.g. music playback), but a bit worse for low-latency loads (e.g. VoIP, gaming). Or to put it another way, PulseAudio has a low per-sample cost, but there is still some optimisation that can be done per packet.

For every playback packet, there are three messages sent: one from server to client saying “I need more data”, one from client to server saying “here’s some data, I put it in SHM, at this address”, and then a third from server to client saying “thanks, I have no more use for this SHM data, please reclaim the memory”. The third message is not sent until the audio has actually been played back.
Every message means syscalls to write, read, and poll a Unix socket. This overhead turned out to be significant enough to be worth improving.

So instead of putting just the audio data into SHM, as of 6.0 we also put the messages into two SHM ringbuffers, one in each direction. For signalling we use eventfds. (There is also an optimisation layer on top of the eventfd that tries to avoid writing to it when no one is currently waiting.) This is not so much about saving memory copies as about saving syscalls.
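
To illustrate the mechanism only (this is a toy, not PulseAudio's actual implementation), here is a minimal single-process Python sketch of a message ring in shared memory with an eventfd used for wakeups instead of per-message socket traffic. It assumes Linux and Python 3.10+ for os.eventfd.

    # Toy sketch of the 6.0-style message path (NOT PulseAudio's code):
    # a fixed-slot ring of messages in shared memory, plus an eventfd that signals
    # "something is queued" instead of writing one socket message per packet.
    import os
    import struct
    from multiprocessing import shared_memory

    SLOT, SLOTS, HDR = 64, 16, 8     # slot size, slot count, header: read+write index (u32 each)

    shm = shared_memory.SharedMemory(create=True, size=HDR + SLOT * SLOTS)
    efd = os.eventfd(0)              # wakeup primitive shared by both sides

    def push(msg: bytes) -> None:
        r, w = struct.unpack_from("II", shm.buf, 0)
        assert (w + 1) % SLOTS != r, "ring full"
        shm.buf[HDR + w * SLOT:HDR + w * SLOT + len(msg)] = msg
        struct.pack_into("II", shm.buf, 0, r, (w + 1) % SLOTS)
        os.eventfd_write(efd, 1)     # one wakeup; a real implementation skips even
                                     # this when nobody is waiting on the other side

    def pop() -> bytes:
        r, w = struct.unpack_from("II", shm.buf, 0)
        while r == w:                # ring empty: sleep until the producer signals
            os.eventfd_read(efd)
            r, w = struct.unpack_from("II", shm.buf, 0)
        msg = bytes(shm.buf[HDR + r * SLOT:HDR + (r + 1) * SLOT]).rstrip(b"\x00")
        struct.pack_into("II", shm.buf, 0, (r + 1) % SLOTS, w)
        return msg

    push(b"here's data, I put it in SHM, at this address")
    push(b"thanks, please reclaim the memory")
    print(pop(), pop())
    shm.close(); shm.unlink()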

From my own unscientific benchmarks (i.e., running “top”), this saves us roughly 10–25% of CPU power in low-latency use cases, half of that being on the client side.

Read more
Michael Hall

When things are moving fast and there’s still a lot of work to do, it’s sometimes easy to forget to stop and take the time to say “thank you” to the people who are helping you and the rest of the community. So every November 20th we in Ubuntu have a Community Appreciation Day, to remind us all of the importance of those two little words. We should of course all be saying it every day, but having a reminder like this helps when things get busy.

As so many who have already posted their appreciation have said, it would be impossible for me to thank everybody I want to thank. Even if I spent all day on this post, I wouldn’t be able to mention even half of them. So instead I’m going to highlight two people specifically.

First I want to thank Scarlett Clark from the Kubuntu community. In the lead up to this last Ubuntu Online Summit we didn’t have enough track leads on the Users track, which is one that I really wanted to see more active this time around. The track leads from the previous UOS couldn’t do it because of personal or work schedules, and as time was getting scarce I was really in a bind to find someone. I put out a general call for help in one of the Kubuntu IRC channels, and Scarlett was quick to volunteer. I really appreciated her enthusiasm then, and even more the work that she put in as a first-time track lead to help make the Users track a success. So thank you Scarlett.

Next, I really, really want to say thank you to Svetlana Belkin, who seems to be contributing in almost every part of Ubuntu these days (including ones I barely know about, like Ubuntu Scientists). She was also a repeat track lead last UOS for the Community track, and has been contributing a lot of great feedback and ideas on ways to make our amazing community even better. Most importantly, in my opinion, she’s trying to re-start the Ubuntu Leadership team, which I think is needed now more than ever, and which I really want to become more active in once I get through with some deadline-bound work. I would encourage anybody else who is a leader in the community, or who wants to be one, to join her in that. And thank you, Svetlana, for everything that you do.

It is both a joy and a privilege to be able to work with people like Scarlett and Svetlana, and everybody else in the Ubuntu community. Today more than ever I am reminded about how lucky I am to be a part of it.

Read more
Daniel Holbach

Appreciation for Michael Hall

Today marks another Ubuntu Community Appreciation Day, one of Ubuntu’s beautiful traditions, where you publicly thank people for their work. It’s always hard to pick just one person or a group of people, but you know what – better to appreciate somebody’s work than nobody’s work at all.

One person I’d like to thank for their work is Michael Hall. He is always around, always working on a number of projects, always involved in discussions on social media and never shy to add yet another work item to his TODO list. Even with big projects on his plate, he is still writing apps, blog entries, charms and hacks on a number of websites, and is still on top of things like mailing list discussions.

I don’t know how he does it, but I’m astounded how he gets things done and still stays friendly. I’m glad he’s part of our team and tirelessly working on making Ubuntu a better place.

I also like this picture of him.


Mike: keep up the good work! :-)

Read more
Michael Hall

Last week was our second ever Ubuntu Online Summit, and it couldn’t have gone better. Not only was it a great chance for us in Canonical to talk about what we’re working on and get community members involved in the ongoing work, it was also an opportunity for the community to show us what they have been working on and to get us involved with them.

Community Track leads

This was also the second time we’ve recruited track leads from among the community. Traditionally leading a track was a responsibility given to one of the engineering managers within Canonical, and it was up to them to decide what sessions to put on the UDS schedule. We kept the same basic approach when we went to online vUDS. But starting with UOS 14.06, we asked leaders in the community to help us with that, and they’ve done a phenomenal job. This time we had Nekhelesh Ramananthan, José Antonio Rey, Svetlana Belkin, Rohan Garg, Elfy, and Scarlett Clark take up that call, and they were instrumental in getting even more of the community involved.

Community Session Hosts

More than a third of those who created sessions for this UOS were from the community, not Canonical. For comparison, in the last in-person UDS, less than a quarter of session creators were non-Canonical. The shift online has been disruptive, and we’ve tried many variations to find what works, but this metric shows that those efforts are starting to pay off. Community involvement, indeed community direction, is higher in these Online Summits than it was in UDS. This is becoming a true community event: community focused, community organized, and community run.

Community Initiatives

The Ubuntu Online Summit wasn’t just about the projects driven by Canonical, such as the Ubuntu desktop and phone; there were many sessions about projects started and driven by members of the community. Last week we were shown the latest development on Ubuntu MATE and KDE Plasma 5 from non-Canonical-led flavors. We saw a whole set of planning sessions for community-developed Core Apps and an exciting new Component Store for app developers to share bits of code with each other. For outreach there were sessions on providing localized ISOs for LoCo teams and expanding the scope of the community-led Start Ubuntu project. Finally we had someone from the community kick off a serious discussion about getting Ubuntu running on cars. Cars! All of these exciting sessions were thought up by, proposed by, and run by members of the community.

Community Improvements

This was a great Ubuntu Online Summit, and I was certainly happy with the increased level of community involvement in it, but we still have room to make it better. And we are going to make it better with help from the community. We will be sending out a survey to everyone who registered as attending this UOS to gather feedback and ideas; please take the time to fill it out when you get the link. If you attended but didn’t register there’s still time: go to the link above, log in and save your attendance record. Finally, it’s never too early to start thinking about the next UOS and what sessions you might want to lead for it, so that you’re prepared when those track leads come knocking at your door.

Read more
mitechie

A couple of people have reached out to me via LinkedIn and reminded me that my three year work anniversary happened last Friday. Three years since I left my job at a local place to go work for Canonical, where I got the chance to be paid to work on open source software and better my Python skills with the team working on Launchpad. My wife wasn’t quite sure. “You’ve only been at your job a year and a half, and your last one was only two years. What makes this different?”

What’s amazing, looking back, is just how *right* the decision turned out to be. I was nervous at the time. I really wasn’t Launchpad’s biggest fan. However, the team I interviewed with held this promise of making me a better developer. They were doing code reviews of every branch that went up to land. They had automated testing, and they firmly believed in unit and functional tests of the code. It was a case of the product not exciting me, but the environment, working with smart developers from across the globe, was exactly what I felt I needed to move forward with my career, my craft.


I joined my team on Launchpad in a squad of four other developers. It was funny. When I joined I felt so lost. Launchpad is an amazing and huge bit of software, and I knew I was in over my head. I talked with my manager at the time, Deryck, and he told me “Don’t worry, it’ll take you about a year to get really productive working on Launchpad.” A year! Surely you jest, and if you’re not jesting…wtf did I just get myself into?

It was a long road, and over time I learned how to take a code review (a really hard skill for many of us), how to do one, and how to talk with other smart and opinionated developers. I learned the value of the daily standup, and how to manage work across a kanban board. I learned to really learn from others. Up until this point I’d always been the big fish in a small pond, and suddenly I was the minnow hiding in the shallows. Forget books on how to code; just look at the diff in the code review you’re reading right now. Learn!

My boss was right: it was nearly ten months before I really felt like I could be asked to do most things in Launchpad and get them done in an efficient way. Soon our team was moved on from Launchpad to other projects. It was actually pretty great. On the one hand, “Hey! I just got the hang of this thing”, but on the other hand, we were moving on to new things. Development life here has never been one of sitting still. We sit down and work on the Ubuntu cycle of six-month plans, and it’s funny because even that is such a long time. Do you really know what you’ll be doing six months from now?


Since that time in Launchpad I’ve gotten to work on several different projects, and I ended up switching teams to work on the Juju GUI. I didn’t really know a lot about this Juju thing, but the GUI was a fascinating project. It’s a really large-scale JavaScript application. This is no “toss some jQuery on a web page” thing here.

I also moved to work under a new manager, Gary. He was my second manager since starting at Canonical, and I was amazed at my luck. Here I’ve had two great mentors who made huge strides in teaching me how to work with other developers, how to do the fun stuff and the mundane, and how to take pride in the accomplishments of the team. I sit down at my computer every day and I’ve got the brain power of amazing people at my disposal over IRC, Google Hangouts, email, and more. It’s amazing to think that at these sprints we do, I’m pretty much never the smartest person in the room. However, that’s what’s so great. It’s never boring, and when there’s a problem the key is that we put our joint brilliant minds to it. In every hard problem we’ve faced I’ve never found that a single person had the one true solution. What we come up with together is always better than what any of us had apart.

When Gary left there was a void for team lead, and it was something I was interested in. I really can’t say enough awesome things about the team of folks I work with. I wanted to keep us all together, and I felt like it would be great for us to try to keep things going. It was kind of a “well, I’ll just try not to $#@$@# it up” situation. That was more than nine months ago now. Gary and Deryck taught me so much, and I still have to bite my tongue and ask myself “What would Gary do?” at times. I’ve kept some things the same, but I’ve also brought my own flavor into the team a bit, at least I like to think so. These days my GitHub profile doesn’t show me landing a branch a day, but I take great pride in the progress of the team as a whole each and every week.

The team I run now is as awesome a group of people as I could hope to work for. I do mean that: I work for my team. It’s never the other way around, and that’s one lesson I definitely picked up from my previous leads. The projects we’re working on are exciting and new and are really important to Canonical. I get to sit in on discussions and planning meetings with Canonical super-genius veterans like Kapil, Gustavo, and occasionally Mark Shuttleworth himself.

Looking back, I’ve spent the last three years becoming a better developer, getting on-the-job training in leading a team of brilliant people, and taking a crash course in thinking about the project, not just as the bugs or features for the week, but as it needs to exist in three to six months. I’ve spent three years bouncing between “what have I gotten myself into, this is beyond my abilities” and “I’ve got this. You can’t find someone else to do this better”. I always tell people that if you’re not swimming as hard as you can to keep up, find another job. I feel like three years ago I did that, and I’ve been swimming ever since.


Three years is a long time in a career these days. It’s been a wild ride, and I can’t thank the folks who let me in the door, taught me, and have given me the power to do great things with my work enough. I’ve worked my butt off in Budapest, Copenhagen, Cape Town, Brussels, North Carolina, London, Vegas, and the Bay Area a few times. Will I be here three years from now? Who knows, but I know I’ve got an awesome team to work with on Monday and an awesome product to keep building. I’m going to really enjoy doing work that’s challenging and fulfilling every step of the way.



Read more
David Planella

Over a week ago, we announced the Ubuntu Scope Showdown: a competition to write a scope for Ubuntu on phones in 5 weeks and win exciting prizes.

Scopes are Ubuntu’s innovative take on revolutionizing the content and services experience. For users, they provide quick and intuitive access to content without the need to load an app. For developers and operators, scopes provide an easy path to surface their content and customize the UX in a way that is very flexible and integrated.

After the initial contest kickoff, we’ve already had a number of participants blogging, sharing updates and teasers about their work. Here’s a peek at some of their progress.

A variety of scopes

In the words of Robert Schroll, of Beru fame, e-mail apps are just passé. So much so that he decided to explore an interesting concept: reading your e-mail with a scope. With a nice extra touch: Ubuntu Online Accounts integration.

Because e-mail apps are so 90s – the Gmail scope

After listening to one of Daniel Holbach’s mixes, Bogdan Cuza thought they deserved a scope of their own, and so the Mixcloud scope was born. The rest, as they say, is history.

Can’t get enough of those Balkan Beats – the Mixcloud scope

You don’t know where to eat tonight? No worries, Sam Segers has you covered. Check out his Google places scope to easily find somewhere new to go.

Your cooking skills not up to your date’s expectations? The Google places scope comes to the rescue

Developer Dan has a treat for all of us movie lovers: the Cinema scope. It features categories and departments, with settings, TV series and genres coming soon! Check out the details on his blog.

Helping Ubuntu users watch stuff dreams are made of since 2014 – the Cinema scope

Riccardo Padovani is bringing the dark horse (or, well, duck?) of search engines to Ubuntu. Armed with the DuckDuckGo scope, get results like a pro with “real privacy, smarter search and less clutter”.

Duck is the new black – the DuckDuckGo scope

A wishlist of scopes

Like Alan Pope and Michael Hall, I have my own wishlist of scopes for content that I’d like to have accessible at a flick of the finger on my phone. Maybe one of you can make our day?

  • 8tracks scope: I love music, and I love mixes. 8tracks is a music streaming service to listen to the mixes their community members create and to get creative submitting mixes. As an avid mixer and listener, I’d be using this all of the time, especially if it came with Online Accounts integration that showed me content relevant to my interests.
  • Ask Ubuntu scope: the biggest Ubuntu Q&A site. I regularly check the ‘application-development‘ tag there to see any new questions and if I can help a fellow Ubuntu developer (and you should too). It’d be absolutely awesome to get those updates easily on my phone screen, with settings to filter on tags and the ability to upvote/downvote questions and answers.

Not sure what to write a scope for yet? Well, check out the ideas over at the Showdown reddit, or let your imagination run wild with a comprehensive list of APIs to get more inspiration!

A prize for your scopes

It’s not too late to enter the Showdown: you too can write a scope and win prizes! Here are some tips to get started:

Looking forward to seeing the next batch of scopes participants come up with!


Read more
Daniel Holbach

Ubuntu Online Summit: 12-14 November

Yet another Ubuntu Online Summit (UOS) is ahead of us. It’s going to happen from 12-14 November. Participation is open to everyone, so to attend simply:

If you still need to get a session on the schedule to discuss a topic related to your field, create the session soon!

What I love about the Ubuntu Online Summit is that people get together, invite some fresh sets of eyes and brains, and figure out together where Ubuntu is going. The sessions are also not too long (1h), so you are forced to come to conclusions (and work items!) quickly.

Sessions I’m particularly looking forward to are:

  • 12 Nov
    • 15 UTC – Community Roundtable
    • 15 UTC – Testing Unity 8 Desktop
    • 16 UTC – App/Scope development training events
    • 18 UTC – Community events in Vivid cycle
    • 19 UTC – More appdev/scope code examples
  • 13 Nov
    • 16 UTC – Community Council Feedback
    • 16 UTC – Porting Apps To Ubuntu
    • 18 UTC – Ubuntu Women Vivid Goals
    • 19 UTC – Ubuntu Community Q&A
  • 14 Nov
    • 14 UTC – Transparency and participation
    • 15 UTC – Promoting the Ubuntu phone in LoCos
    • 16 UTC – LoCo Team Activity Review
    • 18 UTC – Ubuntu Touch Component Store

Please note: session times might still be changed, so keep an eye on the schedule. (Also: there’s lots more good stuff!)

Looking forward to seeing you all there! :-D

Read more
Michael Hall

A couple of weeks ago we announced the start of a contest to write new Unity Scopes. These are the Dash plugins that let you search for different kinds of content from different sources. Last week Alan Pope posted his Scopes Wishlist detailing the ones he would like to see. And while I think they’re all great ideas, they didn’t particularly resonate with my personal use cases. So I’ve decided to put together a wishlist of my own:

Ubuntu Community

I’ve started on one of these in the past, more to test-drive the Scope API and documentation (both of which have changed somewhat since then), but our community has a rather large amount of content available via open APIs or feeds that could be combined into one really great scope. My attempt used the LoCo Team Portal API, but there is also the Planet Ubuntu RSS feed (as well as feeds from a number of other websites), iCal feeds from Summit, a Google calendar for UbuntuOnAir, etc. There’s a lot of community data out there just waiting to be surfaced to Ubuntu users.

Open States

My friend Paul Tagliamonte works for the Sunlight Foundation, which provides access to a huge amount of local law and political data (open culture + government, how cool is that?), including the Open States website, which provides more local information for those of us in the USA. Not only could a scope use these APIs to make it easy for us citizens to keep up with what’s going on in our governments, it’s also a great candidate to use the location information to default you to local data no matter where you are.

Desktop

This really only has a purpose on Unity 8 on the desktop, and even then only for the short term until a normal desktop is implemented. But for now it would be a nice way to view your desktop files and such. I think that a scope’s categories and departments might provide a unique opportunity to re-think how we use the desktop too, with the different files organized by type, sorted by date, and displayed in a way that suits their content.

There’s potential here to do some really interesting things, I’m just not sure what they are. If one of you intrepid developers has some good ideas, though, give it a shot.

Comics

Let’s be honest: I love web comics, you love web comics, we all love web comics. Wouldn’t it be super awesome if you got the newest, best webcomics on your Dash? Think about it: get your XKCD, SMBC or The Oatmeal delivered every day. Okay, it might be a productivity killer, but still, I’d install it.

Read more
Michael Hall

Next week we will be kicking off the November 2014 Ubuntu Online Summit where people from the Ubuntu community and Canonical will be hosting live video sessions talking about what is being worked on, what is currently available, and what the future holds across all of the Ubuntu ecosystem.

We are in the process of recruiting sessions and filling out the Summit schedule for this event, which should be finalized at the start of next week. You can register that you are attending on the Summit website, where you can also mark specific sessions that you are interested in and get a personalized view of your schedule (and an available iCal feed too!). UOS is designed for participation, not just consumption. Every session will have an active IRC channel that goes along with it, where you can speak directly to the people on video. For discussion sessions, you’re encouraged to join the video yourself when you want to join the conversation.

Moreover, we want you to host sessions! Anybody who has an idea for a good topic for conversation, presentation, or planning, and is willing to host the video (meaning you need to run a Google Hangout On Air), can propose a session. You don’t need to be a Canonical employee, project leader, or even an Ubuntu member to run a session; all you need is a topic and a willingness to be the person to drive it. And don’t worry, we have track leads who have volunteered to help you get it set up.

These sessions will be split into tracks, so you can follow along with the topics that interest you. Or you can jump from track to track to see what everybody else in the community is doing. And if you want to host a session yourself, you can contact any one of the friendly Track Leads, who will help you get it registered and on the schedule.

Ubuntu Development

Those who have participated in the Ubuntu Developer Summit (UDS) in the past will find the same kind of platform-focused topics and discussions in the Ubuntu Development track. This track covers everything from the kernel to packaging, desktops and all of the Ubuntu flavors.

The track leads are: Will Cooke, Łukasz Zemczak, Steve Langasek, Antonio Rosales, and Rohan Garg.

App & Scope Development

For developers who are targeting the Ubuntu platform, for both apps and Unity scopes, we will be featuring a number of presentations on the current state of the tools, APIs and documentation, as well as gathering feedback from those who have been using them to help us improve upon them in Ubuntu 15.04. You will also see a lot of planning for the Ubuntu Core Apps, and some showcases of other apps or technologies that developers are creating.

The track leads are: Tim Peeters, Michael Hall, Alan Pope, and Nekhelesh Ramananthan.

Cloud & DevOps

Going beyond the core and client side, Ubuntu is making a lot of waves in the cloud and server market these days, and there’s no better place to learn about what we’re building (and help us build it) than the Cloud & DevOps track. Whether you want to roll out your own OpenStack cloud, or make your web service easy to deploy and scale out, you will find topics here that interest you.

The track leads are: Antonio Rosales, Marco Ceppi, Patricia Gaughen, and José Antonio Rey.

Community

The Ubuntu Online Summit is itself a community-coordinated event, and we’ve got a track dedicated to helping us improve and grow the whole community. You can use this to showcase the amazing work that your team has been doing, or plan out new events and projects for the coming cycle. The Community Team from Canonical will be there, as well as members of the various councils, flavors and boards that provide governance for the Ubuntu project.

The track leads are: David Planella, Daniel Holbach, Svetlana Belkin, and José Antonio Rey.

Users

And of course we can’t forget about our millions of users; we have a whole track set up just to provide them with resources and presentations that will help them make the most out of Ubuntu. If you have been working on a project for Ubuntu, you should think about hosting a session on this track to show it off. We’ll also be hosting several feedback sessions to hear directly from users about what works, what doesn’t, and how we can improve.

The track leads are: Nicholas Skaggs, Elfy, and Scarlett Clark.

Read more
Dustin Kirkland

Say it with me, out loud.  Lex.  See.  Lex-see.  LXC.

Now, change the "see" to a "dee".  Lex.  Dee.  Lex-dee.  LXD.

Easy!

Earlier this week, here in Paris, at the OpenStack Design Summit, Mark Shuttleworth and Canonical introduced our vision and proof of concept for LXD.

You can find the official blog post on Canonical Insights, and a short video introduction on Youtube (by yours truly).

Our Canonical colleague Stephane Graber posted a bit more technical design detail here on the lxc-devel mailing list, which was picked up by Hacker News. And LWN published a story yesterday covering another Canonical colleague of ours, Serge Hallyn, and his work on Cgroups and CGManager, all of which feeds into LXD. As it happens, Stephane and Serge are upstream co-maintainers of Linux Containers. Tycho Andersen, another colleague of ours, has been working on CRIU, which was the heart of his amazing demo this week: live migrating a container running the cult classic first-person shooter Doom! back and forth between two hosts.



Moreover, we've answered a few journalists' questions for excellent articles on ZDNet and SynergyMX. Predictably, El Reg is skeptical (which isn't necessarily a bad thing). But unfortunately, The Var Guy doesn't quite understand the technology (and uses this article to conflate LXD with other random Canonical/Ubuntu complaints).

In any case, here's a bit more about LXD, in my own words...

Our primary design goal with LXD is to extend containers into process-based systems that behave like virtual machines.

We love KVM for its total machine abstraction, as a full virtualization hypervisor.  Moreover, we love what Docker does for application level development, confinement, packaging, and distribution.

But as an operating system and Linux distribution, our customers are, in fact, asking us for complete operating systems that boot and function within a Linux Container's execution space, natively.

Linux Containers are essential to our reference architecture of OpenStack, where we co-locate multiple services on each host. Nearly every host is a Nova compute node as well as a Ceph storage node, and also runs a couple of units of "OpenStack overhead", such as MySQL, RabbitMQ, MongoDB, etc. Rather than running each of those services all on the same physical system, we actually put each of them in its own container, with its own IP address, namespace, cgroup, etc. This gives us tremendous flexibility in the orchestration of those services. We're able to move (migrate, even live migrate) those services from one host to another. With that, it becomes possible to "evacuate" a given host by moving each contained set of services elsewhere, perhaps to a larger or smaller system, and then shut down the unit (perhaps to replace a hard drive or memory, or repurpose it entirely).

Containers also enable us to similarly confine services on virtual machines themselves!  Let that sink in for a second...  A contained workload is able, then, to move from one virtual machine to another, to a bare metal system.  Even from one public cloud provider, to another public or private cloud!

The last two paragraphs capture a few best practices that we've learned over the last few years implementing OpenStack for some of the largest telcos and financial services companies in the world. What we're hearing from Internet service and cloud providers is not too dissimilar... These customers have their own customers who want cloud instances that perform at bare metal equivalence. They also want to maximize the utilization of their server hardware, sometimes by more densely packing workloads on given systems.

As such, LXD is then a convergence of several different customer requirements, and our experience deploying some massively complex, scalable workloads (a la OpenStack, Hadoop, and others) in enterprises. 

The rapid evolution of a few key technologies under and around LXC has recently made this dream possible. Namely: user namespaces, cgroups, SECCOMP, AppArmor, and CRIU, as well as the library abstraction that our external tools use to manage these containers as systems.

LXD is a new "hypervisor" in that it provides (REST) APIs that can manage Linux Containers. This is a step function beyond where we've been to date: able to start and stop containers with local commands and, to a limited extent, libvirt, but not much more. "Booting" a system in a container, running an init system, bringing up network devices (without nasty hacks in the container's root filesystem), etc. was challenging, but we've worked our way through all of these, and Ubuntu boots unmodified in Linux Containers today.

Moreover, LXD is a whole new semantic for turning any machine -- Intel, AMD, ARM, POWER, physical, or even a virtual machine (e.g. your cloud instances) -- into a system that can host and manage and start and stop and import and export and migrate multiple collections of services bundled within containers.

I've received a number of questions about the "hardware assisted" containerization slide in my deck.  We're under confidentiality agreements with vendors as to the details and timelines for these features.

What (I think) I can say, is that there are hardware vendors who are rapidly extending some of the key features that have made cloud computing and virtualization practical, toward the exciting new world of Linux Containers.  Perhaps you might read a bit about CPU VT extensions, No Execute Bits, and similar hardware security technologies.  Use your imagination a bit, and you can probably converge on a few key concepts that will significantly extend the usefulness of Linux Containers.

As soon as such hardware technology is enabled in Linux, you have our commitment that Ubuntu will bring those features to end users faster than anyone else!

If you want to play with it today, you can certainly see the primitives within Ubuntu's LXC. Launch Ubuntu containers within LXC and you'll start to get the general, low-level idea. If you want to view it from one layer above, give our new nova-compute-flex (flex was the code name before it was released as LXD) a try. It's publicly available as a tech preview in Ubuntu OpenStack Juno (authored by Chuck Short, Scott Moser, and James Page). Here, you can launch OpenStack instances as LXC containers (rather than KVM virtual machines), as "general purpose" system instances.

Finally, perhaps lost in all of the activity here, are a couple of things we're doing differently for the LXD project. We at Canonical have taken our share of criticism over the years about our choice of code hosting (our own Bazaar and Launchpad.net), our preferred free software licence (GPLv3/AGPLv3), and our contributor license agreement (Canonical CLA). [For the record: I love bzr/Launchpad, prefer GPL/AGPL, and am mostly ambivalent on the CLA; but I won't argue those points here.]
  1. This is a public, community project under LinuxContainers.org
  2. The code and design documents are hosted on Github
  3. Under an Apache License
  4. Without requiring signatures of the Canonical CLA
These have been very deliberate, conscious decisions, lobbied for and won by our engineers leading the project, in the interest of collaborating with and garnering the participation of communities that have traditionally shunned Canonical-led projects, raising the above objections. I, for one, am eager to see the contribution and collaboration that, too often, we don't see.

Cheers!
:-Dustin

Read more
beuno

As the pieces start to come together and we get closer to converging mobile and desktop in Ubuntu, Click packages running on the desktop start to feel like they will be a reality soon (Unity 8 brings us Click packages). I think it's actually very exciting, and I thought I'd talk a bit about why that is.

First off: security. The Ubuntu Security team have done some pretty mind-blowing work to ensure Click packages are confined in a safe, reliable but still flexible manner. Jamie has explained how and why very eloquently. This will only further strengthen an OS that is already well known and respected for being a safe place to do computing for users of all skill levels.
My second favorite thing: simplification for app developers. When we started sketching out how Clicks would work, there was a very sharp focus on enabling app developers to have more freedom to build and maintain their apps, while still making it very easy to build a package. Clicks, by design, can't express any external dependencies other than a base system (called a "framework"). That means that if your app depends on a fancy library that isn't shipped by default, you just bundle it into the Click package and you're set. You get to update it whenever it suits you as a developer, and have predictability over how it will run on a user's computer (or device!). That opens up the possibility of shipping newer versions of a library, or just sticking with one that works for you. We exchange that freedom for some minor theoretical memory usage increases and extra disk space (if 2 apps end up including the same library), but with today's computing power and disk space cost, it seems like a small price to pay to empower application developers.
Building on top of my first 2 favorite things comes the third: updating apps outside of the Ubuntu release cycle and gaining control as an app developer. Because Click packages are safer than traditional packaging systems, and dependencies are more self-contained, app developers can ship their apps directly to Ubuntu users via the software store without the need for specialized reviewers to review them first. It's also simpler to carry support for previous base systems (frameworks) in newer versions of Ubuntu, allowing app developers to ship the same version of their app to both Ubuntu users on the cutting edge of an Ubuntu development release, as well as the previous LTS from a year ago. There have been many cases over the years where this was an obvious problem, OwnCloud being the latest example of the tension that arises from the current approach where app developers don't have control over what gets shipped.
I have many more favorite things about Clicks, some more are:
- You can create "fat" packages where the same binary supports multiple architectures
- Updates between versions are transactional, so you never end up with a botched app update. No more holding your breath while an update installs, hoping your power doesn't drop mid-way
- Multi-user environments can have different versions of the same app without any problems
- Because Clicks are so easy to introspect and verify for proper confinement, the process for verifying them has been easy to automate, enabling the store to process new applications within minutes (if not seconds!) and make them available to users immediately

The future of Ubuntu is exciting and it has a scent of a new revolution.

Read more
Nicholas Skaggs

Ubuntu Online Summit: Vivid Edition

Ubuntu Online Summit is once again upon us. This is a community event by and for the community. It's all-encompassing and intends to cover a wide range of topics. You don't need to be a developer, project lead, member of a team, or even an Ubuntu member to join and participate. The only requirement is your passion for Ubuntu and a desire to discuss its future with others.

The dates are set as November 12th to 14th, from 1400 UTC to 2000 UTC. I am once again privileged to be a track lead for the Users track. In my opinion, this is the best track, as it's the one the largest number of us within the community can easily feel a part of (just don't let Michael, David, Daniel or Alan know I said that). Do you use Ubuntu? Awesome, this is the track for you.

What I'm asking for is sessions. Have an idea for a session? Please propose it! Everything you need to know about participating can be found here. If you've attended things like Ubuntu Open Week or a classroom session in the past, all of those types of sessions are welcome and encouraged.

"The focus of the Users track is to highlight ways to get the most out of Ubuntu, on your laptop, your phone or your server. From detailed how-to sessions, to tips and tricks, and more, this track can provide something for everybody, regardless of skill level."

Regardless of your desire to contribute a session, I would encourage everyone to take a look at the schedule as it evolves and consider joining in on sessions they find interesting. In addition, it's not yet too late to offer up ideas for sessions (though I would encourage you to find a way to host the session).

Ready to propose a session? Check out this page and feel free to ping me or any track lead for help. Don't forget to register to attend and check out the currently scheduled sessions!

Read more
Nicholas Skaggs

Autopilot Feature Primer

Autopilot celebrated its second anniversary as an independent project this summer. During that time it has developed into a useful tool for testing application UIs for the GTK and Qt toolkits. Support has also been extended to Mir as well as to phablet devices.

With this in mind, I thought it would be useful to bring attention to some new and under-used features of Autopilot, along with providing a brief explanation of some companion tools you might find useful. Thus I present to you: an Autopilot primer. Let's talk through some new features, shall we?

Python3 Support
Autopilot started as a Python 2 tool but has since migrated to Python 3, and you should too! For now the entire source tree remains Python 2 compatible, but you really should migrate your tests to Python 3. You'll notice the autopilot3 binary in newer releases, which should be used to run Autopilot with Python 3.

Scenario Support
Scenarios are a wonderful way to keep your tests simple and easy to read while allowing you to test with many different inputs. In short, you might need to test several edge cases as part of your acceptance testing. This is most easily accomplished by keeping the test itself generic and utilizing a scenario to vary your inputs, as in the sketch below. You can find more information on scenarios specific to Autopilot in the Autopilot documentation.
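
As a minimal sketch of what that looks like (the class and scenario values here are made up for illustration; only the scenarios attribute and the AutopilotTestCase base class come from Autopilot itself):

    from autopilot.testcase import AutopilotTestCase


    class AdditionTests(AutopilotTestCase):
        """Hypothetical example: one generic test body, several inputs."""

        # Each (name, attributes) pair runs the test once with those values
        # available on self, so the edge cases live here, not in the test body.
        scenarios = [
            ('small integers', {'left': '2', 'right': '2', 'expected': '4'}),
            ('negative numbers', {'left': '-3', 'right': '1', 'expected': '-2'}),
            ('decimals', {'left': '1.5', 'right': '1.5', 'expected': '3'}),
        ]

        def test_addition(self):
            # A real test would launch the app under test (for example with
            # self.launch_test_application) and drive its UI; this body just
            # shows that each scenario's values arrive as instance attributes.
            self.assertEqual(float(self.left) + float(self.right),
                             float(self.expected))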

Screenshots / Video
Autopilot allows you to get a video recording of a test failure. To make sure Autopilot records failures, install recordmydesktop and pass the -r argument to your autopilot3 run command. However, at the moment this requires X, so for now it doesn't work with the Mir backend (which things like the Ubuntu phone utilize). Fortunately, a screenshot at the point of failure, when combined with the log, is generally sufficient to solve your issue. Getting those screenshots brings us to subunit support.

Subunit Support
By default Autopilot prints the test output and logs straight to your console in a text format. However, Autopilot also supports outputting to XML and subunit. Subunit support is what I would like to highlight, for a few reasons. When you set the output format to subunit, you get a few niceties: an easier-to-grok format for tools, and screenshots of the application when failures occur. To get a subunit stream, pass the -f subunit argument to your autopilot3 run command. You will also want to pass -O with a filename to save the output to a file, since the subunit stream contains binary data.

Test Result Viewer
So, with this subunit test results file, how can you enjoy all of its goodness? Enter trv, a simple Python UI that will let you organize the test run in an easy-to-view manner, including screenshots. The tool is the creation of Thomi Richards, who describes it as a quick hack (:p), and has a YouTube video demonstrating its use. It's perfect for viewing the subunit stream and visualizing your test results. For now it's not packaged, but it can be easily obtained via Launchpad. Grab it with bzr branch lp:trv.

autopilot3 vis
The vis tool allows you to visually interact with the introspection tree after launching an application using autopilot launch. This means you can visualize the application in the same way Autopilot does at runtime, with live tree updates. It lets you see what Autopilot sees, allowing you to interactively build your testcase.

I'll refer you to the official tutorial for more information, as well as a YouTube video by yours truly. It's from a livestream, but it covers what you need to know. autopilot3 vis also contains a search box and a highlight tool, which didn't exist in the original version, so it's even nicer now than before. Give it a whirl!

autopilot3-sandbox-run
I talked about this utility when I covered the test runners for Autopilot. Still, I would be remiss if I didn't mention it again. Everything I said in that post still applies, so go have a quick read about how to use the tool if you need more information. This tool enables you to easily run Autopilot tests on the desktop in a nested X server. What that means to you as a test author is that you can run tests without giving up your desktop session. No more waiting for Autopilot to hand back control of your mouse after a test. If you are writing tests, you should be using this tool, along with autopilot3 vis mentioned above, during your test-writing process.

Per test timeout
Although we all only write "good tests", sometimes you may find your test misbehaves. When this happens, the test may not exit cleanly or may get stuck in a loop. The result is that Autopilot and the system under test wait forever for the test to finish. To prevent a rogue test from killing a test suite run, Autopilot is introducing support for per-test timeouts. This has landed in Vivid; you'll need version 1.5.0+15.04.20141031-0ubuntu1 or later. To use the feature, add the --test-timeout argument to autopilot run and give it a timeout in seconds.

In conclusion
Autopilot has gained many new features along the way, and these are but a few of the most recent and important ones. I hope this helps you take another look at what Autopilot might be able to help you test. Happy testing!

Read more
Ben Howard

We are pleased to announce that Ubuntu 12.04 LTS, 14.04 LTS, and 14.10 are now in beta on Google Compute Engine [1, 2, 3].

These images support the traditional user-data as well as the Google Compute Engine startup scripts. We have also included the Google Cloud SDK, pre-installed. Users coming from other clouds can expect the same great experience, while enjoying the features of Google Compute Engine.

From an engineering perspective, a lot of us are excited to see this launch. While we don't expect too many rough edges, it is a beta, so feedback is welcome. Please file bugs or join us in #ubuntu-server on Freenode to report any issues (ping me, utlemming, rcj or Odd_Bloke).

Finally, I wanted to thank those who have helped on this project. Launching a cloud is not an easy engineering task. You have to build infrastructure to support the new cloud, create tooling to build and publish, write QA stacks, and do packaging work. All of this spans multiple teams and disciplines. The support from Google and Canonical's Foundations and Kernel teams has been instrumental in this launch, as well as the engineers on the Certified Public Cloud team.

Getting the Google Cloud SDK:

As part of the launch, Canonical and Google have been working together on packaging a version of the Google Cloud SDK. At this time, we are unable to bring it into the main archives. However, you can find it in our partner archive.

To install it, run the following:

  • echo "deb http://archive.canonical.com/ubuntu $(lsb_release -c -s) partner" | sudo tee /etc/apt/sources.list.d/partner.list
  • sudo apt-get update
  • sudo apt-get -y install google-cloud-sdk


Then follow the instructions for using the Cloud SDK at [4]
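
Once the SDK is installed, getting started looks something like this (the ubuntu-os-cloud image project name is my assumption here, so verify it against the docs at [4]):

  gcloud auth login                                      # authenticate the SDK against your Google account
  gcloud compute images list --project ubuntu-os-cloud   # list the published Ubuntu images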


[1] https://cloud.google.com/compute/docs/operating-systems#ubuntu
[2] http://googlecloudplatform.blogspot.co.uk/2014/11/curated-ubuntu-images-now-available-on.html
[3] http://insights.ubuntu.com/2014/11/03/certified-ubuntu-images-available-on-google-cloud-platform/
[4] https://cloud.google.com/sdk/gcloud/

Read more
Steph

Last week was a week of firsts for me: my first trip to America, my first Sprint and my first chili-dog.

Introducing myself as the new (and only) Editorial and Web Publisher, I dove head first into the world of developers, designers and Community members. It was a very absorbing week, which afterwards felt more like a marathon than a sprint.

After being grilled by Customs, we finally arrived at Tyson’s Corner, where 200 or so other developers, designers and Community members had gathered for the Devices Sprint. It was a great opportunity for me to see how people from every corner of the world contribute to Ubuntu and share their passion for open source. I found it especially interesting to see how designers and developers, with their different mindsets, collaborate.

The highlight for me was talking to some of the Community guys; it was really interesting to hear why and how they contribute from all corners of the world.

(From left to right: Riccardo, Andrew, Filippo and Victor)

(The main Ballroom)

(Design Team dinner. From the left: TingTing, Andrew, John, Giorgio, Marcus, Olga, James, Florian, Bejan and Jouni)

I caught up with Olga and Giorgio to share their thoughts and experiences from the Sprint:

So how did the Sprint go for you guys?

Olga: “It was very busy and productive in terms of having face time with development, which was the main reason we went, as we don’t get to see them that often.

For myself personally, I have a better understanding of things in terms of what the issues are and what is needed, and also what can or cannot be done in certain ways. I was very pleased with the whole sprint. There was a lot of running around between meetings, and I tried to use the time in-between to catch up with people. Development also made the approach to the Design Team for guidance, opinions and a general catch-up/chat, which was great!

Steph: “I agree, I found it especially productive in terms of getting the right people in the same room and working face-to-face, as it was a lot more productive than sharing a document or talking on IRC.”

Giorgio: “Working remotely with the engineers works well for certain tasks, but the Design Team sometimes needs to achieve a higher bandwidth through other means of communication, so these sprints every 3 months are incredibly useful.

What a Sprint allows us to do is to put a face to the name and start to understand each other’s needs, expectations and problems, as stuff gets lost in translation.

I agree with Olga, this Sprint was a massive opportunity to shift to a much higher level of collaboration with the engineers.”

What was your best moment?

Giorgio: “My best moment was when the engineers’ perception of the Design Team’s efforts changed. My goal is to better this collaboration process with each Sprint.”

Did anything come up that you didn’t expect?

Giorgio: “Gaming was an underground topic that came up during the Sprint. There was a nice workshop on Wednesday on it, which was really interesting.”

Steph: “Andrew, a Community Developer I interviewed, actually made two games in one evening during the Sprint!”

Olga: “They love what they do, they’re very passionate and care deeply.”

Do you feel as a whole the Design Team gave off a good vibe?

Giorgio: “We got a good vibe, but it’s still a work in progress, as we need to raise our game and become even better. This has been a long process, as the design of the Platform and Apps wasn’t simply done overnight. However, now we are at a mature stage of the process where we can afford to engage with the Community more. We are all in this journey together.

Canonical has a very strong engineering nature, as it was founded by engineers and driven by them, and it has evolved because of this. As a result, over the last few years the design culture has begun to complement that. Now they expect a steer from the Design Team on a number of things, for example responsive design and convergence.

The Sprint was good, as we finally got more of a sense of what the other parties expect from us. It’s like a relationship: you suddenly have a moment of clarity and enlightenment, where you start to see what you actually need to do, and that will make the relationship better.”

Olga: “The other parties and the Development Team started to understand that initiating communication is not just the responsibility of the Design Team, but an engagement we all need to be involved in.”

All in all it was a very productive week, as everyone worked hard to push for the first release of the BQ phone, together with some positive feedback and shout-outs for the Design Team :)

(Unicorn hard at work)

There was a bit of time for some sightseeing too…

It would have been rude not to see what the capital had to offer, so on the weekend before the sprint we checked out some of Washington’s iconic sights.

(The Washington Monument)

We saw most of the important landmarks, like the White House, the Washington Monument and the Lincoln Memorial. Seeing them in the flesh was spectacular; however, I half expected a UFO to appear over the Monument like in ‘Independence Day’, and Abraham Lincoln to suddenly get up off his chair like in the movie ‘Night at the Museum’ - unfortunately none of that happened.

(The White House)

D.C. isn’t as buzzing as London, but it definitely has a lot of character, with an array of thriving ethnic pockets representing African, Asian, Latin American and Italian cultures. Washington is known for getting its sax on, so a few of the Design Team and I decided to check out the night scene and hit a local jazz club in Georgetown.

...And all the jazz.

(Twins Jazz Club)

On the Sunday, we decided to leave the hustle and bustle of the city and venture out to the beautiful Great Falls Park, only 10-15 minutes from the hotel. The park sits in northern Fairfax County along the banks of the Potomac River and is an integral part of the George Washington Memorial Parkway. Its creeks and rapids made for some great selfie opportunities…

(Great Falls Park)

Read more
Daniel Holbach

In the Community Q&A with Alan and Michael yesterday, I talked a bit about the sprint in Washington already, but I thought I’d write up a bit more about it again.

First of all: it was great to see a lot of old friends and new faces at the sprint. Especially with the two events (14.10 release and upcoming phone release) coming together, it was good to lock people up in various rooms and let them figure it out when nobody could run away easily. For me it was a great time to chat with lots of people and figure out if we’re still on track and if our old assumptions still made sense.  :-)

We were all locked up in a room as well…

What was pretty fantastic was the general vibe there. Everyone was crazy busy, but everybody seemed happy to see that their work of the last months and years is slowly coming together. There are still bugs to be fixed but we are close to getting the first Ubuntu phone ever out the door. Who would have thought that a couple of years ago?

It was great to catch up with people about our App Development story. There were a number of things we looked at during the sprint:

  • Up until now we had a VirtualBox image with Ubuntu and the SDK installed for people at training (or App Dev School) events who didn’t have Ubuntu installed. This was a clunky solution, as my beta testing at xda:devcon confirmed. I sat down with Michael Vogt, who encouraged me to look into providing something more akin to an “official ISO” and showed me the ropes in terms of creating seeds and how livecd-rootfs is used.
  • I had a number of conversations with XiaoGuo Liu, who works for Canonical as well, and has been testing our developer site and our tools for the last few months. He also wrote lots and lots of great articles about Ubuntu development in Chinese. We talked about providing our developer site in Chinese as well, how we could integrate code snippets more easily and many other things.
  • I had many chats at the breakfast buffet with Zoltan and Zsombor of the SDK team (it always looked like we were there at the same time). We talked about making fat packages easier to generate, my experiences with kits and many other things.
  • It was also great to catch up with David Callé who is working on scopes documentation. He’s just great!

What I also liked a lot was being able to debug issues with the phone on the spot. I switched to the proposed channel, set it to read-write, installed debug symbols and voilà, grabbing the developer was never easier. My personal recommendation: make sure the problem happens around 12:00, stand in the hallway with your laptop attached to the phone, and wait for the developer in charge to grab lunch. This way I could find out more about a couple of issues which are being fixed now.
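
For the curious, the rough shape of that setup with the phablet tools (the channel name and exact commands are from memory, so treat this as a sketch rather than a recipe):

  ubuntu-device-flash touch --channel=ubuntu-touch/devel-proposed   # flash the proposed channel
  phablet-config writable-image                                     # make the image read-write
  phablet-shell                                                     # shell in and install the debug symbols you need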

It was also great to meet the non-Canonical folks at the sprint who worked on the Core Apps like crazy.

What I liked as well was our Berlin meet-up: we basically invited Berliners, ex-Berliners and honorary Berliners and went to a Mexican place. Wished I met those guys more often.

I also got my Ubuntu Pioneers T-Shirt. Thanks a lot! I’ll make sure to post a selfie (as everyone else :-)) soon.

Thanks a lot for a great sprint, now I’m looking forward to the upcoming Ubuntu Online Summit (12-14 Nov)! Make sure you register and add your sessions to the schedule!

Read more
Nicholas Skaggs

Sprinting in DC: Friday

This week, my team and I are sprinting with many of the core app developers and other folks inside of Ubuntu Engineering. Each day I'm attempting to give you a glimpse of what's happening.

Friday brings an end to an exciting week, and the faces of myself and those around me reflect the discussions, excitement, fun and lack of sleep this week has entailed.

Bubbles!
The first session of the day involved hanging out with the QA team while they heard feedback from various teams on issues with quality and process within their projects. It's always fun to hear about what causes different teams the most issues when it comes to testing.

Next I spent some time interviewing a couple of folks for publishing later. In my case I interviewed Thomi from the QA team and Zoltan from the SDK team about the work going on within their teams and how the last cycle went. The team as a whole has been conducting interviews all week. Look for these interviews to appear on youtube in the coming weeks.

Thursday night, while having a look through a book store, I came across an ad for ubuntu in Linux Voice magazine. It made me smile. The dream of running ubuntu on all my devices is becoming closer every day.


I'd like to thank all the community core app developers who joined us this week. Thanks for hanging out with us, providing feedback, and most of all for creating the wonderful apps we have for the ubuntu phone. Your work has helped shape the device and turn it into what it is today.

Looking back over the schedule, there were sessions I wish I had been able to attend, and it was wonderful catching up with everyone. Sadly my flight home prevented me from attending the closing session and presumably getting a summary of some of these sessions. I can say I was delighted to talk and interact with the unity8 team on the next steps for unity8 on the desktop. I trust next cycle we as a community can do more around testing their work.

As I head to the airport for home, it's time to celebrate the release of utopic unicorn!

Read more
ssweeny

Ubuntu 14.10

I’m at a sprint in Washington, DC with my fellow Canonicalers gearing up for the commercial release of our phone OS (more on that later), but that doesn’t mean we’ve forgotten about the desktop and cloud.

Yesterday was another Ubuntu release day! We released Ubuntu 14.10, codenamed the Utopic Unicorn. Look for lots of subtle improvements to the desktop as we prepare some big things to come soon.

As usual, you can take a tour or go straight to the download page.

And while we’re at it, here’s to another 10 years of Ubuntu!

Read more